Test Report: KVM_Linux_containerd 18771

d8f44c85dc50f37f8a74f4a275902bf69829aaa8:2024-04-29:34254

Test fail (1/325)

|-------|----------------------------------------|----------|
| Order | Failed test                            | Duration |
|-------|----------------------------------------|----------|
|    39 | TestAddons/parallel/NvidiaDevicePlugin |    9.02s |
|-------|----------------------------------------|----------|
TestAddons/parallel/NvidiaDevicePlugin (9.02s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hdg5d" [8d78a716-78c0-4ec1-aa75-4a1757524a08] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005808784s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-051772
addons_test.go:955: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-051772: exit status 11 (283.812616ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T12:28:46Z" level=error msg="stat /run/containerd/runc/k8s.io/ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:956: failed to disable nvidia-device-plugin: args "out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-051772" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051772 -n addons-051772
helpers_test.go:244: <<< TestAddons/parallel/NvidiaDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 logs -n 25: (1.884558894s)
helpers_test.go:252: TestAddons/parallel/NvidiaDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:23 UTC |                     |
	|         | -p download-only-726309                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| delete  | -p download-only-726309                                                                     | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| start   | -o=json --download-only                                                                     | download-only-351879 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC |                     |
	|         | -p download-only-351879                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| delete  | -p download-only-351879                                                                     | download-only-351879 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| delete  | -p download-only-726309                                                                     | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| delete  | -p download-only-351879                                                                     | download-only-351879 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-643325 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC |                     |
	|         | binary-mirror-643325                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36767                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-643325                                                                     | binary-mirror-643325 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC |                     |
	|         | addons-051772                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC |                     |
	|         | addons-051772                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-051772 --wait=true                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | addons-051772                                                                               |                      |         |         |                     |                     |
	| addons  | addons-051772 addons                                                                        | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-051772 ip                                                                            | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	| addons  | addons-051772 addons disable                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-051772 addons disable                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-051772 ssh curl -s                                                                   | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-051772 ip                                                                            | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	| addons  | addons-051772 addons disable                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-051772 addons disable                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:28 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC |                     |
	|         | -p addons-051772                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-051772 ssh cat                                                                       | addons-051772        | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94_default_test-pvc/file1 |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:24:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:24:27.324415   90765 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:24:27.324539   90765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:27.324549   90765 out.go:304] Setting ErrFile to fd 2...
	I0429 12:24:27.324553   90765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:27.324758   90765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:24:27.325403   90765 out.go:298] Setting JSON to false
	I0429 12:24:27.326207   90765 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7611,"bootTime":1714385856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:24:27.326267   90765 start.go:139] virtualization: kvm guest
	I0429 12:24:27.328384   90765 out.go:177] * [addons-051772] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:24:27.329938   90765 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 12:24:27.329945   90765 notify.go:220] Checking for updates...
	I0429 12:24:27.331471   90765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:24:27.333078   90765 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:24:27.334545   90765 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:24:27.336064   90765 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:24:27.337536   90765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:24:27.339088   90765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:24:27.369193   90765 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 12:24:27.370490   90765 start.go:297] selected driver: kvm2
	I0429 12:24:27.370507   90765 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:24:27.370517   90765 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:24:27.371176   90765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:24:27.371269   90765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18771-82690/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:24:27.385309   90765 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:24:27.385378   90765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:24:27.385590   90765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:24:27.385632   90765 cni.go:84] Creating CNI manager for ""
	I0429 12:24:27.385644   90765 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 12:24:27.385652   90765 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 12:24:27.385706   90765 start.go:340] cluster config:
	{Name:addons-051772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-051772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:24:27.385797   90765 iso.go:125] acquiring lock: {Name:mkedacf31368d400e657fc8150aebe85f02fab3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:24:27.387544   90765 out.go:177] * Starting "addons-051772" primary control-plane node in "addons-051772" cluster
	I0429 12:24:27.388813   90765 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 12:24:27.388839   90765 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0429 12:24:27.388849   90765 cache.go:56] Caching tarball of preloaded images
	I0429 12:24:27.388908   90765 preload.go:173] Found /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:24:27.388918   90765 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0429 12:24:27.389188   90765 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/config.json ...
	I0429 12:24:27.389214   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/config.json: {Name:mk79d9a5a38de037b5a3e79abd9884ab9c186668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:24:27.389367   90765 start.go:360] acquireMachinesLock for addons-051772: {Name:mka638ce84a2d4b6f750d2cc036bc1c951f9256e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:24:27.389420   90765 start.go:364] duration metric: took 35.942µs to acquireMachinesLock for "addons-051772"
	I0429 12:24:27.389438   90765 start.go:93] Provisioning new machine with config: &{Name:addons-051772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-051772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0429 12:24:27.389504   90765 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 12:24:27.391266   90765 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 12:24:27.391384   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:24:27.391417   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:24:27.404957   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0429 12:24:27.405470   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:24:27.406088   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:24:27.406110   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:24:27.406482   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:24:27.406697   90765 main.go:141] libmachine: (addons-051772) Calling .GetMachineName
	I0429 12:24:27.406849   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:27.406999   90765 start.go:159] libmachine.API.Create for "addons-051772" (driver="kvm2")
	I0429 12:24:27.407029   90765 client.go:168] LocalClient.Create starting
	I0429 12:24:27.407075   90765 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem
	I0429 12:24:27.466568   90765 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/cert.pem
	I0429 12:24:27.629482   90765 main.go:141] libmachine: Running pre-create checks...
	I0429 12:24:27.629506   90765 main.go:141] libmachine: (addons-051772) Calling .PreCreateCheck
	I0429 12:24:27.630055   90765 main.go:141] libmachine: (addons-051772) Calling .GetConfigRaw
	I0429 12:24:27.630475   90765 main.go:141] libmachine: Creating machine...
	I0429 12:24:27.630489   90765 main.go:141] libmachine: (addons-051772) Calling .Create
	I0429 12:24:27.630655   90765 main.go:141] libmachine: (addons-051772) Creating KVM machine...
	I0429 12:24:27.631872   90765 main.go:141] libmachine: (addons-051772) DBG | found existing default KVM network
	I0429 12:24:27.632679   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:27.632494   90787 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0429 12:24:27.632715   90765 main.go:141] libmachine: (addons-051772) DBG | created network xml: 
	I0429 12:24:27.632732   90765 main.go:141] libmachine: (addons-051772) DBG | <network>
	I0429 12:24:27.632753   90765 main.go:141] libmachine: (addons-051772) DBG |   <name>mk-addons-051772</name>
	I0429 12:24:27.632765   90765 main.go:141] libmachine: (addons-051772) DBG |   <dns enable='no'/>
	I0429 12:24:27.632779   90765 main.go:141] libmachine: (addons-051772) DBG |   
	I0429 12:24:27.632788   90765 main.go:141] libmachine: (addons-051772) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 12:24:27.632793   90765 main.go:141] libmachine: (addons-051772) DBG |     <dhcp>
	I0429 12:24:27.632827   90765 main.go:141] libmachine: (addons-051772) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 12:24:27.632847   90765 main.go:141] libmachine: (addons-051772) DBG |     </dhcp>
	I0429 12:24:27.632859   90765 main.go:141] libmachine: (addons-051772) DBG |   </ip>
	I0429 12:24:27.632883   90765 main.go:141] libmachine: (addons-051772) DBG |   
	I0429 12:24:27.632896   90765 main.go:141] libmachine: (addons-051772) DBG | </network>
	I0429 12:24:27.632905   90765 main.go:141] libmachine: (addons-051772) DBG | 
	I0429 12:24:27.637866   90765 main.go:141] libmachine: (addons-051772) DBG | trying to create private KVM network mk-addons-051772 192.168.39.0/24...
	I0429 12:24:27.702432   90765 main.go:141] libmachine: (addons-051772) DBG | private KVM network mk-addons-051772 192.168.39.0/24 created
	I0429 12:24:27.702467   90765 main.go:141] libmachine: (addons-051772) Setting up store path in /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772 ...
	I0429 12:24:27.702506   90765 main.go:141] libmachine: (addons-051772) Building disk image from file:///home/jenkins/minikube-integration/18771-82690/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:24:27.702522   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:27.702423   90787 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:24:27.702620   90765 main.go:141] libmachine: (addons-051772) Downloading /home/jenkins/minikube-integration/18771-82690/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18771-82690/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:24:27.964202   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:27.964079   90787 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa...
	I0429 12:24:28.067985   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:28.067870   90787 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/addons-051772.rawdisk...
	I0429 12:24:28.068020   90765 main.go:141] libmachine: (addons-051772) DBG | Writing magic tar header
	I0429 12:24:28.068037   90765 main.go:141] libmachine: (addons-051772) DBG | Writing SSH key tar header
	I0429 12:24:28.068050   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:28.068017   90787 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772 ...
	I0429 12:24:28.068199   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772
	I0429 12:24:28.068221   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772 (perms=drwx------)
	I0429 12:24:28.068229   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18771-82690/.minikube/machines
	I0429 12:24:28.068237   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:24:28.068242   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18771-82690
	I0429 12:24:28.068251   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 12:24:28.068256   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home/jenkins
	I0429 12:24:28.068262   90765 main.go:141] libmachine: (addons-051772) DBG | Checking permissions on dir: /home
	I0429 12:24:28.068266   90765 main.go:141] libmachine: (addons-051772) DBG | Skipping /home - not owner
	I0429 12:24:28.068276   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins/minikube-integration/18771-82690/.minikube/machines (perms=drwxr-xr-x)
	I0429 12:24:28.068285   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins/minikube-integration/18771-82690/.minikube (perms=drwxr-xr-x)
	I0429 12:24:28.068292   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins/minikube-integration/18771-82690 (perms=drwxrwxr-x)
	I0429 12:24:28.068304   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 12:24:28.068337   90765 main.go:141] libmachine: (addons-051772) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 12:24:28.068358   90765 main.go:141] libmachine: (addons-051772) Creating domain...
	I0429 12:24:28.069406   90765 main.go:141] libmachine: (addons-051772) define libvirt domain using xml: 
	I0429 12:24:28.069432   90765 main.go:141] libmachine: (addons-051772) <domain type='kvm'>
	I0429 12:24:28.069442   90765 main.go:141] libmachine: (addons-051772)   <name>addons-051772</name>
	I0429 12:24:28.069450   90765 main.go:141] libmachine: (addons-051772)   <memory unit='MiB'>4000</memory>
	I0429 12:24:28.069477   90765 main.go:141] libmachine: (addons-051772)   <vcpu>2</vcpu>
	I0429 12:24:28.069494   90765 main.go:141] libmachine: (addons-051772)   <features>
	I0429 12:24:28.069508   90765 main.go:141] libmachine: (addons-051772)     <acpi/>
	I0429 12:24:28.069517   90765 main.go:141] libmachine: (addons-051772)     <apic/>
	I0429 12:24:28.069529   90765 main.go:141] libmachine: (addons-051772)     <pae/>
	I0429 12:24:28.069539   90765 main.go:141] libmachine: (addons-051772)     
	I0429 12:24:28.069550   90765 main.go:141] libmachine: (addons-051772)   </features>
	I0429 12:24:28.069557   90765 main.go:141] libmachine: (addons-051772)   <cpu mode='host-passthrough'>
	I0429 12:24:28.069598   90765 main.go:141] libmachine: (addons-051772)   
	I0429 12:24:28.069621   90765 main.go:141] libmachine: (addons-051772)   </cpu>
	I0429 12:24:28.069628   90765 main.go:141] libmachine: (addons-051772)   <os>
	I0429 12:24:28.069636   90765 main.go:141] libmachine: (addons-051772)     <type>hvm</type>
	I0429 12:24:28.069642   90765 main.go:141] libmachine: (addons-051772)     <boot dev='cdrom'/>
	I0429 12:24:28.069649   90765 main.go:141] libmachine: (addons-051772)     <boot dev='hd'/>
	I0429 12:24:28.069654   90765 main.go:141] libmachine: (addons-051772)     <bootmenu enable='no'/>
	I0429 12:24:28.069661   90765 main.go:141] libmachine: (addons-051772)   </os>
	I0429 12:24:28.069666   90765 main.go:141] libmachine: (addons-051772)   <devices>
	I0429 12:24:28.069674   90765 main.go:141] libmachine: (addons-051772)     <disk type='file' device='cdrom'>
	I0429 12:24:28.069682   90765 main.go:141] libmachine: (addons-051772)       <source file='/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/boot2docker.iso'/>
	I0429 12:24:28.069693   90765 main.go:141] libmachine: (addons-051772)       <target dev='hdc' bus='scsi'/>
	I0429 12:24:28.069698   90765 main.go:141] libmachine: (addons-051772)       <readonly/>
	I0429 12:24:28.069706   90765 main.go:141] libmachine: (addons-051772)     </disk>
	I0429 12:24:28.069727   90765 main.go:141] libmachine: (addons-051772)     <disk type='file' device='disk'>
	I0429 12:24:28.069744   90765 main.go:141] libmachine: (addons-051772)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 12:24:28.069763   90765 main.go:141] libmachine: (addons-051772)       <source file='/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/addons-051772.rawdisk'/>
	I0429 12:24:28.069775   90765 main.go:141] libmachine: (addons-051772)       <target dev='hda' bus='virtio'/>
	I0429 12:24:28.069785   90765 main.go:141] libmachine: (addons-051772)     </disk>
	I0429 12:24:28.069797   90765 main.go:141] libmachine: (addons-051772)     <interface type='network'>
	I0429 12:24:28.069810   90765 main.go:141] libmachine: (addons-051772)       <source network='mk-addons-051772'/>
	I0429 12:24:28.069821   90765 main.go:141] libmachine: (addons-051772)       <model type='virtio'/>
	I0429 12:24:28.069841   90765 main.go:141] libmachine: (addons-051772)     </interface>
	I0429 12:24:28.069860   90765 main.go:141] libmachine: (addons-051772)     <interface type='network'>
	I0429 12:24:28.069873   90765 main.go:141] libmachine: (addons-051772)       <source network='default'/>
	I0429 12:24:28.069884   90765 main.go:141] libmachine: (addons-051772)       <model type='virtio'/>
	I0429 12:24:28.069895   90765 main.go:141] libmachine: (addons-051772)     </interface>
	I0429 12:24:28.069901   90765 main.go:141] libmachine: (addons-051772)     <serial type='pty'>
	I0429 12:24:28.069913   90765 main.go:141] libmachine: (addons-051772)       <target port='0'/>
	I0429 12:24:28.069923   90765 main.go:141] libmachine: (addons-051772)     </serial>
	I0429 12:24:28.069933   90765 main.go:141] libmachine: (addons-051772)     <console type='pty'>
	I0429 12:24:28.069955   90765 main.go:141] libmachine: (addons-051772)       <target type='serial' port='0'/>
	I0429 12:24:28.069968   90765 main.go:141] libmachine: (addons-051772)     </console>
	I0429 12:24:28.069980   90765 main.go:141] libmachine: (addons-051772)     <rng model='virtio'>
	I0429 12:24:28.069993   90765 main.go:141] libmachine: (addons-051772)       <backend model='random'>/dev/random</backend>
	I0429 12:24:28.070004   90765 main.go:141] libmachine: (addons-051772)     </rng>
	I0429 12:24:28.070014   90765 main.go:141] libmachine: (addons-051772)     
	I0429 12:24:28.070023   90765 main.go:141] libmachine: (addons-051772)     
	I0429 12:24:28.070038   90765 main.go:141] libmachine: (addons-051772)   </devices>
	I0429 12:24:28.070054   90765 main.go:141] libmachine: (addons-051772) </domain>
	I0429 12:24:28.070071   90765 main.go:141] libmachine: (addons-051772) 
	I0429 12:24:28.074381   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:39:85:28 in network default
	I0429 12:24:28.075062   90765 main.go:141] libmachine: (addons-051772) Ensuring networks are active...
	I0429 12:24:28.075086   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:28.075815   90765 main.go:141] libmachine: (addons-051772) Ensuring network default is active
	I0429 12:24:28.076091   90765 main.go:141] libmachine: (addons-051772) Ensuring network mk-addons-051772 is active
	I0429 12:24:28.076632   90765 main.go:141] libmachine: (addons-051772) Getting domain xml...
	I0429 12:24:28.077347   90765 main.go:141] libmachine: (addons-051772) Creating domain...
	I0429 12:24:29.246208   90765 main.go:141] libmachine: (addons-051772) Waiting to get IP...
	I0429 12:24:29.246969   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:29.247370   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:29.247429   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:29.247366   90787 retry.go:31] will retry after 265.805854ms: waiting for machine to come up
	I0429 12:24:29.514928   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:29.515387   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:29.515414   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:29.515339   90787 retry.go:31] will retry after 287.918283ms: waiting for machine to come up
	I0429 12:24:29.804951   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:29.805452   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:29.805476   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:29.805408   90787 retry.go:31] will retry after 440.754515ms: waiting for machine to come up
	I0429 12:24:30.248178   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:30.248563   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:30.248621   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:30.248520   90787 retry.go:31] will retry after 498.019374ms: waiting for machine to come up
	I0429 12:24:30.748220   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:30.748592   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:30.748621   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:30.748545   90787 retry.go:31] will retry after 625.698442ms: waiting for machine to come up
	I0429 12:24:31.375544   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:31.376035   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:31.376087   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:31.375980   90787 retry.go:31] will retry after 773.691528ms: waiting for machine to come up
	I0429 12:24:32.150923   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:32.151308   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:32.151334   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:32.151266   90787 retry.go:31] will retry after 802.251542ms: waiting for machine to come up
	I0429 12:24:32.954866   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:32.955470   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:32.955503   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:32.955386   90787 retry.go:31] will retry after 933.163079ms: waiting for machine to come up
	I0429 12:24:33.890492   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:33.890903   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:33.890936   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:33.890863   90787 retry.go:31] will retry after 1.146640782s: waiting for machine to come up
	I0429 12:24:35.038703   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:35.039194   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:35.039218   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:35.039136   90787 retry.go:31] will retry after 2.22410604s: waiting for machine to come up
	I0429 12:24:37.264578   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:37.264939   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:37.264969   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:37.264893   90787 retry.go:31] will retry after 2.419948043s: waiting for machine to come up
	I0429 12:24:39.687576   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:39.688186   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:39.688214   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:39.688140   90787 retry.go:31] will retry after 3.237977875s: waiting for machine to come up
	I0429 12:24:42.928078   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:42.928532   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:42.928563   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:42.928476   90787 retry.go:31] will retry after 4.043844925s: waiting for machine to come up
	I0429 12:24:46.973517   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:46.973933   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find current IP address of domain addons-051772 in network mk-addons-051772
	I0429 12:24:46.973970   90765 main.go:141] libmachine: (addons-051772) DBG | I0429 12:24:46.973903   90787 retry.go:31] will retry after 3.822977755s: waiting for machine to come up
	I0429 12:24:50.800812   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:50.801474   90765 main.go:141] libmachine: (addons-051772) Found IP for machine: 192.168.39.38
	I0429 12:24:50.801511   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has current primary IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:50.801520   90765 main.go:141] libmachine: (addons-051772) Reserving static IP address...
	I0429 12:24:50.801868   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find host DHCP lease matching {name: "addons-051772", mac: "52:54:00:af:60:43", ip: "192.168.39.38"} in network mk-addons-051772
	I0429 12:24:50.873116   90765 main.go:141] libmachine: (addons-051772) Reserved static IP address: 192.168.39.38
	I0429 12:24:50.873151   90765 main.go:141] libmachine: (addons-051772) Waiting for SSH to be available...
	I0429 12:24:50.873162   90765 main.go:141] libmachine: (addons-051772) DBG | Getting to WaitForSSH function...
	I0429 12:24:50.876999   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:50.877318   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772
	I0429 12:24:50.877339   90765 main.go:141] libmachine: (addons-051772) DBG | unable to find defined IP address of network mk-addons-051772 interface with MAC address 52:54:00:af:60:43
	I0429 12:24:50.877570   90765 main.go:141] libmachine: (addons-051772) DBG | Using SSH client type: external
	I0429 12:24:50.877601   90765 main.go:141] libmachine: (addons-051772) DBG | Using SSH private key: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa (-rw-------)
	I0429 12:24:50.877654   90765 main.go:141] libmachine: (addons-051772) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:24:50.877676   90765 main.go:141] libmachine: (addons-051772) DBG | About to run SSH command:
	I0429 12:24:50.877692   90765 main.go:141] libmachine: (addons-051772) DBG | exit 0
	I0429 12:24:50.881057   90765 main.go:141] libmachine: (addons-051772) DBG | SSH cmd err, output: exit status 255: 
	I0429 12:24:50.881084   90765 main.go:141] libmachine: (addons-051772) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 12:24:50.881107   90765 main.go:141] libmachine: (addons-051772) DBG | command : exit 0
	I0429 12:24:50.881123   90765 main.go:141] libmachine: (addons-051772) DBG | err     : exit status 255
	I0429 12:24:50.881135   90765 main.go:141] libmachine: (addons-051772) DBG | output  : 
	I0429 12:24:53.881778   90765 main.go:141] libmachine: (addons-051772) DBG | Getting to WaitForSSH function...
	I0429 12:24:53.884612   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:53.885140   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:53.885182   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:53.885273   90765 main.go:141] libmachine: (addons-051772) DBG | Using SSH client type: external
	I0429 12:24:53.885320   90765 main.go:141] libmachine: (addons-051772) DBG | Using SSH private key: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa (-rw-------)
	I0429 12:24:53.885356   90765 main.go:141] libmachine: (addons-051772) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 12:24:53.885371   90765 main.go:141] libmachine: (addons-051772) DBG | About to run SSH command:
	I0429 12:24:53.885412   90765 main.go:141] libmachine: (addons-051772) DBG | exit 0
	I0429 12:24:54.007904   90765 main.go:141] libmachine: (addons-051772) DBG | SSH cmd err, output: <nil>: 
	I0429 12:24:54.008173   90765 main.go:141] libmachine: (addons-051772) KVM machine creation complete!
	I0429 12:24:54.008498   90765 main.go:141] libmachine: (addons-051772) Calling .GetConfigRaw
	I0429 12:24:54.009028   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:54.009264   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:54.009533   90765 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:24:54.009550   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:24:54.010876   90765 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:24:54.010893   90765 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:24:54.010900   90765 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:24:54.010945   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.013102   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.013461   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.013486   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.013632   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.013799   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.013954   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.014063   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.014218   90765 main.go:141] libmachine: Using SSH client type: native
	I0429 12:24:54.014477   90765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0429 12:24:54.014496   90765 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:24:54.119086   90765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:24:54.119117   90765 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:24:54.119126   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.121761   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.122101   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.122126   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.122230   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.122437   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.122600   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.122728   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.122874   90765 main.go:141] libmachine: Using SSH client type: native
	I0429 12:24:54.123040   90765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0429 12:24:54.123055   90765 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:24:54.229225   90765 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:24:54.229332   90765 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:24:54.229345   90765 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:24:54.229354   90765 main.go:141] libmachine: (addons-051772) Calling .GetMachineName
	I0429 12:24:54.229603   90765 buildroot.go:166] provisioning hostname "addons-051772"
	I0429 12:24:54.229636   90765 main.go:141] libmachine: (addons-051772) Calling .GetMachineName
	I0429 12:24:54.229842   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.232598   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.232949   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.232973   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.233112   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.233298   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.233446   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.233563   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.233721   90765 main.go:141] libmachine: Using SSH client type: native
	I0429 12:24:54.233902   90765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0429 12:24:54.233915   90765 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051772 && echo "addons-051772" | sudo tee /etc/hostname
	I0429 12:24:54.351848   90765 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051772
	
	I0429 12:24:54.351878   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.354622   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.355016   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.355045   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.355254   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.355421   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.355545   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.355726   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.355846   90765 main.go:141] libmachine: Using SSH client type: native
	I0429 12:24:54.356032   90765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0429 12:24:54.356058   90765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051772' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051772/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051772' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:24:54.469871   90765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:24:54.469903   90765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18771-82690/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-82690/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-82690/.minikube}
	I0429 12:24:54.469985   90765 buildroot.go:174] setting up certificates
	I0429 12:24:54.470002   90765 provision.go:84] configureAuth start
	I0429 12:24:54.470021   90765 main.go:141] libmachine: (addons-051772) Calling .GetMachineName
	I0429 12:24:54.470323   90765 main.go:141] libmachine: (addons-051772) Calling .GetIP
	I0429 12:24:54.473092   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.473509   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.473539   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.473705   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.475991   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.476328   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.476347   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.476451   90765 provision.go:143] copyHostCerts
	I0429 12:24:54.476518   90765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-82690/.minikube/ca.pem (1082 bytes)
	I0429 12:24:54.476661   90765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-82690/.minikube/cert.pem (1123 bytes)
	I0429 12:24:54.476742   90765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-82690/.minikube/key.pem (1679 bytes)
	I0429 12:24:54.476803   90765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-82690/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca-key.pem org=jenkins.addons-051772 san=[127.0.0.1 192.168.39.38 addons-051772 localhost minikube]
	I0429 12:24:54.722205   90765 provision.go:177] copyRemoteCerts
	I0429 12:24:54.722285   90765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:24:54.722311   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.725133   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.725539   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.725572   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.725732   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.725973   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.726143   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.726296   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:24:54.806630   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:24:54.833602   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:24:54.859714   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:24:54.886191   90765 provision.go:87] duration metric: took 416.170951ms to configureAuth
	I0429 12:24:54.886232   90765 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:24:54.886414   90765 config.go:182] Loaded profile config "addons-051772": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:24:54.886473   90765 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:24:54.886493   90765 main.go:141] libmachine: (addons-051772) Calling .GetURL
	I0429 12:24:54.887707   90765 main.go:141] libmachine: (addons-051772) DBG | Using libvirt version 6000000
	I0429 12:24:54.889994   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.890362   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.890388   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.890528   90765 main.go:141] libmachine: Docker is up and running!
	I0429 12:24:54.890557   90765 main.go:141] libmachine: Reticulating splines...
	I0429 12:24:54.890578   90765 client.go:171] duration metric: took 27.483522201s to LocalClient.Create
	I0429 12:24:54.890607   90765 start.go:167] duration metric: took 27.483608636s to libmachine.API.Create "addons-051772"
	I0429 12:24:54.890620   90765 start.go:293] postStartSetup for "addons-051772" (driver="kvm2")
	I0429 12:24:54.890632   90765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:24:54.890658   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:54.890885   90765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:24:54.890911   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.893181   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.893533   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.893554   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.893721   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.893885   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.894044   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.894189   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:24:54.975152   90765 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:24:54.980152   90765 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:24:54.980208   90765 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-82690/.minikube/addons for local assets ...
	I0429 12:24:54.980262   90765 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-82690/.minikube/files for local assets ...
	I0429 12:24:54.980286   90765 start.go:296] duration metric: took 89.657124ms for postStartSetup
	I0429 12:24:54.980320   90765 main.go:141] libmachine: (addons-051772) Calling .GetConfigRaw
	I0429 12:24:54.980885   90765 main.go:141] libmachine: (addons-051772) Calling .GetIP
	I0429 12:24:54.983208   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.983550   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.983598   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.983793   90765 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/config.json ...
	I0429 12:24:54.983954   90765 start.go:128] duration metric: took 27.594439862s to createHost
	I0429 12:24:54.983975   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:54.986255   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.986591   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:54.986617   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:54.986737   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:54.986928   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.987077   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:54.987209   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:54.987346   90765 main.go:141] libmachine: Using SSH client type: native
	I0429 12:24:54.987505   90765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0429 12:24:54.987525   90765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:24:55.092946   90765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714393495.056626516
	
	I0429 12:24:55.092975   90765 fix.go:216] guest clock: 1714393495.056626516
	I0429 12:24:55.092985   90765 fix.go:229] Guest: 2024-04-29 12:24:55.056626516 +0000 UTC Remote: 2024-04-29 12:24:54.983964133 +0000 UTC m=+27.706886366 (delta=72.662383ms)
	I0429 12:24:55.093037   90765 fix.go:200] guest clock delta is within tolerance: 72.662383ms
	I0429 12:24:55.093042   90765 start.go:83] releasing machines lock for "addons-051772", held for 27.703614073s
	I0429 12:24:55.093095   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:55.093383   90765 main.go:141] libmachine: (addons-051772) Calling .GetIP
	I0429 12:24:55.096117   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.096432   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:55.096466   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.096654   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:55.097211   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:55.097419   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:24:55.097510   90765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:24:55.097556   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:55.097817   90765 ssh_runner.go:195] Run: cat /version.json
	I0429 12:24:55.097842   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:24:55.100324   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.100704   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:55.100729   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.100756   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.100857   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:55.101036   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:55.101194   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:55.101243   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:55.101270   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:55.101367   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:24:55.101403   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:24:55.101554   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:24:55.101683   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:24:55.101805   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:24:55.199444   90765 ssh_runner.go:195] Run: systemctl --version
	I0429 12:24:55.205952   90765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:24:55.212371   90765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:24:55.212431   90765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:24:55.230170   90765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:24:55.230197   90765 start.go:494] detecting cgroup driver to use...
	I0429 12:24:55.230274   90765 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:24:55.264541   90765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:24:55.279021   90765 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:24:55.279073   90765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:24:55.293532   90765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:24:55.308126   90765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:24:55.427531   90765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:24:55.613282   90765 docker.go:233] disabling docker service ...
	I0429 12:24:55.613355   90765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:24:55.637579   90765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:24:55.652546   90765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:24:55.769335   90765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:24:55.895146   90765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:24:55.911085   90765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:24:55.932311   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:24:55.945444   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:24:55.958706   90765 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:24:55.958768   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:24:55.971223   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:24:55.984016   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:24:55.996140   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:24:56.008379   90765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:24:56.021213   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:24:56.033864   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:24:56.046449   90765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
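The sed sequence above rewrites containerd's config.toml in place. The key edit for the announced "cgroupfs" driver is the `SystemdCgroup` substitution; it can be sketched on a sample fragment (the fragment is illustrative, not the full config minikube ships):

```shell
# Demo of the SystemdCgroup rewrite on a minimal config.toml fragment.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs: force cgroupfs by disabling SystemdCgroup,
# preserving the line's original indentation via the captured group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
RESULT=$(grep 'SystemdCgroup' "$CFG")
echo "$RESULT"
rm -f "$CFG"
```

Note the `\1` backreference keeps the TOML indentation intact, which matters because the option lives inside a nested table.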
	I0429 12:24:56.060194   90765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:24:56.071816   90765 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 12:24:56.071884   90765 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 12:24:56.087786   90765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:24:56.099393   90765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:24:56.222944   90765 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:24:56.255431   90765 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0429 12:24:56.255517   90765 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0429 12:24:56.260887   90765 retry.go:31] will retry after 1.48828714s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
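The retry above is minikube polling for containerd's socket to reappear after the restart. A generic poll loop of the same shape, with a delayed temp file standing in for /run/containerd/containerd.sock (path and timings are illustrative):

```shell
# Poll-for-path sketch: a temp file created after a short delay stands in
# for the containerd socket appearing after a service restart.
TARGET=$(mktemp -u)              # path that does not exist yet
( sleep 1; touch "$TARGET" ) &   # "containerd" creating its socket

WAITED=fail
for i in 1 2 3 4 5 6 7 8 9 10; do
  if stat "$TARGET" >/dev/null 2>&1; then
    WAITED=ok
    break
  fi
  sleep 0.5
done
echo "$WAITED"
wait
rm -f "$TARGET"
```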
	I0429 12:24:57.749844   90765 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0429 12:24:57.756339   90765 start.go:562] Will wait 60s for crictl version
	I0429 12:24:57.756428   90765 ssh_runner.go:195] Run: which crictl
	I0429 12:24:57.760957   90765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:24:57.798743   90765 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0429 12:24:57.798826   90765 ssh_runner.go:195] Run: containerd --version
	I0429 12:24:57.829332   90765 ssh_runner.go:195] Run: containerd --version
	I0429 12:24:57.857450   90765 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0429 12:24:57.858890   90765 main.go:141] libmachine: (addons-051772) Calling .GetIP
	I0429 12:24:57.861502   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:57.861822   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:24:57.861845   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:24:57.862054   90765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 12:24:57.866594   90765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
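The host.minikube.internal refresh above uses a filter-then-append rewrite through a temp file so the entry is replaced rather than duplicated. The same pattern on a scratch hosts file (contents and the stale IP are illustrative):

```shell
# Sketch of the host.minikube.internal refresh: drop any stale entry,
# append the current one, then copy the result back over the original.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf "127.0.0.1 localhost\n192.168.39.99${TAB}host.minikube.internal\n" > "$HOSTS"

TMP=$(mktemp)
{ grep -v "${TAB}host\.minikube\.internal\$" "$HOSTS"
  printf "192.168.39.1${TAB}host.minikube.internal\n"
} > "$TMP"
cp "$TMP" "$HOSTS"

RESULT=$(grep 'host.minikube.internal' "$HOSTS")
echo "$RESULT"
rm -f "$HOSTS" "$TMP"
```

Writing to a temp file and copying back avoids truncating the hosts file mid-read, the same reason minikube stages the rewrite through /tmp/h.$$.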
	I0429 12:24:57.880427   90765 kubeadm.go:877] updating cluster {Name:addons-051772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-051772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:24:57.880577   90765 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 12:24:57.880652   90765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:24:57.913739   90765 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 12:24:57.913825   90765 ssh_runner.go:195] Run: which lz4
	I0429 12:24:57.918357   90765 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:24:57.923207   90765 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:24:57.923240   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (393937158 bytes)
	I0429 12:24:59.405938   90765 containerd.go:563] duration metric: took 1.487606728s to copy over tarball
	I0429 12:24:59.406020   90765 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:01.888229   90765 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.482168086s)
	I0429 12:25:01.888262   90765 containerd.go:570] duration metric: took 2.482294888s to extract the tarball
	I0429 12:25:01.888270   90765 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:01.927188   90765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:02.051181   90765 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:02.094821   90765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:25:02.145066   90765 retry.go:31] will retry after 147.677823ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T12:25:02Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0429 12:25:02.293489   90765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:25:02.334090   90765 containerd.go:627] all images are preloaded for containerd runtime.
	I0429 12:25:02.459850   90765 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:25:02.459879   90765 kubeadm.go:928] updating node { 192.168.39.38 8443 v1.30.0 containerd true true} ...
	I0429 12:25:02.460068   90765 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-051772 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-051772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:02.460149   90765 ssh_runner.go:195] Run: sudo crictl info
	I0429 12:25:02.499205   90765 cni.go:84] Creating CNI manager for ""
	I0429 12:25:02.499234   90765 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 12:25:02.499246   90765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:02.499267   90765 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051772 NodeName:addons-051772 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:02.499392   90765 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-051772"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:25:02.499462   90765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:02.510741   90765 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:02.510825   90765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 12:25:02.521689   90765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 12:25:02.542013   90765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:02.561256   90765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0429 12:25:02.580788   90765 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:02.585312   90765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:02.600203   90765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:02.726749   90765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:02.752149   90765 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772 for IP: 192.168.39.38
	I0429 12:25:02.752191   90765 certs.go:194] generating shared ca certs ...
	I0429 12:25:02.752214   90765 certs.go:226] acquiring lock for ca certs: {Name:mkf391f49ef0e86f8fa52c0bc7ca727ac88d212a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:02.752366   90765 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-82690/.minikube/ca.key
	I0429 12:25:02.974906   90765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-82690/.minikube/ca.crt ...
	I0429 12:25:02.974936   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/ca.crt: {Name:mkc13cdb29a17a48989155a6479d8c07895b67e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:02.975102   90765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-82690/.minikube/ca.key ...
	I0429 12:25:02.975113   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/ca.key: {Name:mk7b77206ada8b8ec2d61c9ba8bf6b740241811e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:02.975185   90765 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.key
	I0429 12:25:03.070817   90765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.crt ...
	I0429 12:25:03.070854   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.crt: {Name:mk513077cea130e4ddf39d34ed8f98e920de9f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.071053   90765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.key ...
	I0429 12:25:03.071066   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.key: {Name:mkd653b209771ef3c01ea5ef7effbcbdc40eccdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.071159   90765 certs.go:256] generating profile certs ...
	I0429 12:25:03.071219   90765 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.key
	I0429 12:25:03.071233   90765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt with IP's: []
	I0429 12:25:03.258198   90765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt ...
	I0429 12:25:03.258233   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: {Name:mk8c87dc058fcf3d2704084ede209b7ef11efb77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.258423   90765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.key ...
	I0429 12:25:03.258438   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.key: {Name:mkd3c71126347871772ce5158ab64830ddebf2b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.258542   90765 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key.06d90717
	I0429 12:25:03.258564   90765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt.06d90717 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.38]
	I0429 12:25:03.391079   90765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt.06d90717 ...
	I0429 12:25:03.391112   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt.06d90717: {Name:mk5383bea080dbab6904861d4ce6c1a348f269bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.391288   90765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key.06d90717 ...
	I0429 12:25:03.391309   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key.06d90717: {Name:mkf9cd0862a1645055b558e2c3c2d6fe99560378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.391407   90765 certs.go:381] copying /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt.06d90717 -> /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt
	I0429 12:25:03.391511   90765 certs.go:385] copying /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key.06d90717 -> /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key
	I0429 12:25:03.391564   90765 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.key
	I0429 12:25:03.391582   90765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.crt with IP's: []
	I0429 12:25:03.522041   90765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.crt ...
	I0429 12:25:03.522074   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.crt: {Name:mkb4034c8f9c0c966cbacd1e6713d5665d819813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.522248   90765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.key ...
	I0429 12:25:03.522264   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.key: {Name:mk2e30c7477d6f8b5c9d5f8283bd2859b57a4e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:03.522449   90765 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:03.522489   90765 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:03.522513   90765 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:03.522545   90765 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-82690/.minikube/certs/key.pem (1679 bytes)
	I0429 12:25:03.523195   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:03.555396   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:03.582489   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:03.608585   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:25:03.634426   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 12:25:03.660223   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 12:25:03.686377   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:03.712332   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:03.737449   90765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-82690/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:03.763524   90765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:03.781497   90765 ssh_runner.go:195] Run: openssl version
	I0429 12:25:03.787560   90765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:03.798955   90765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:03.803845   90765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:03.803905   90765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:03.809754   90765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:03.821099   90765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:03.825712   90765 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:03.825764   90765 kubeadm.go:391] StartCluster: {Name:addons-051772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-051772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:03.825838   90765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0429 12:25:03.825897   90765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:25:03.866809   90765 cri.go:89] found id: ""
	I0429 12:25:03.866899   90765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:03.879388   90765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:03.889568   90765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:03.899391   90765 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:03.899415   90765 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:03.899453   90765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:03.908599   90765 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:03.908645   90765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:03.918198   90765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:03.927508   90765 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:03.927568   90765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:03.937020   90765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:03.946063   90765 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:03.946114   90765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:03.956623   90765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:03.965915   90765 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:03.965966   90765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:25:03.975854   90765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:25:04.026305   90765 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:04.026367   90765 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:04.151196   90765 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:04.151357   90765 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:04.151514   90765 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:04.395816   90765 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:04.399326   90765 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:04.399448   90765 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:04.399553   90765 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:04.744154   90765 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:05.154710   90765 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:05.323188   90765 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:05.579741   90765 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:05.967746   90765 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:05.967890   90765 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-051772 localhost] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0429 12:25:06.253961   90765 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:06.254117   90765 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-051772 localhost] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0429 12:25:06.431683   90765 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:06.555561   90765 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:06.879703   90765 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:06.879814   90765 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:06.987735   90765 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:07.226130   90765 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:07.471107   90765 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:07.518201   90765 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:07.595633   90765 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:07.596191   90765 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:07.600606   90765 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:07.602569   90765 out.go:204]   - Booting up control plane ...
	I0429 12:25:07.602663   90765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:07.602769   90765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:07.602886   90765 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:07.618699   90765 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:07.621134   90765 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:07.621452   90765 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:07.748667   90765 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:07.748819   90765 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:08.748738   90765 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000971678s
	I0429 12:25:08.748845   90765 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:13.748161   90765 kubeadm.go:309] [api-check] The API server is healthy after 5.002398817s
	I0429 12:25:13.765048   90765 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:13.779283   90765 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:13.817460   90765 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:13.817632   90765 kubeadm.go:309] [mark-control-plane] Marking the node addons-051772 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:13.830267   90765 kubeadm.go:309] [bootstrap-token] Using token: 1cw8u9.62otafqspa1xni5f
	I0429 12:25:13.831689   90765 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:13.831848   90765 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:13.837617   90765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:13.846225   90765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:13.854422   90765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:13.857954   90765 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:13.865519   90765 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:14.159089   90765 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:14.661771   90765 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:15.156533   90765 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:15.157452   90765 kubeadm.go:309] 
	I0429 12:25:15.157543   90765 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:15.157560   90765 kubeadm.go:309] 
	I0429 12:25:15.157678   90765 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:15.157699   90765 kubeadm.go:309] 
	I0429 12:25:15.157724   90765 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:15.157797   90765 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:15.157843   90765 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:15.157849   90765 kubeadm.go:309] 
	I0429 12:25:15.157932   90765 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:15.157948   90765 kubeadm.go:309] 
	I0429 12:25:15.158022   90765 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:15.158037   90765 kubeadm.go:309] 
	I0429 12:25:15.158109   90765 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:15.158217   90765 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:15.158325   90765 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:15.158338   90765 kubeadm.go:309] 
	I0429 12:25:15.158430   90765 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:15.158531   90765 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:15.158544   90765 kubeadm.go:309] 
	I0429 12:25:15.158668   90765 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1cw8u9.62otafqspa1xni5f \
	I0429 12:25:15.158787   90765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:07b6c9f6491ed0da6891f6be35cd8d2c852662ed2a0fdd9f3fcd29701a48bc85 \
	I0429 12:25:15.158824   90765 kubeadm.go:309] 	--control-plane 
	I0429 12:25:15.158835   90765 kubeadm.go:309] 
	I0429 12:25:15.158947   90765 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:15.158957   90765 kubeadm.go:309] 
	I0429 12:25:15.159053   90765 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1cw8u9.62otafqspa1xni5f \
	I0429 12:25:15.159185   90765 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:07b6c9f6491ed0da6891f6be35cd8d2c852662ed2a0fdd9f3fcd29701a48bc85 
	I0429 12:25:15.159714   90765 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:25:15.159921   90765 cni.go:84] Creating CNI manager for ""
	I0429 12:25:15.159942   90765 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 12:25:15.161760   90765 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 12:25:15.163243   90765 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 12:25:15.178450   90765 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 12:25:15.202856   90765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:15.202939   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:15.202973   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051772 minikube.k8s.io/updated_at=2024_04_29T12_25_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=addons-051772 minikube.k8s.io/primary=true
	I0429 12:25:15.248671   90765 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:15.361641   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:15.861968   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:16.361840   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:16.862569   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:17.361964   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:17.861736   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:18.362594   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:18.862080   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:19.361914   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:19.862429   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:20.361781   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:20.861722   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:21.362517   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:21.862248   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:22.362669   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:22.861936   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:23.362568   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:23.861774   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:24.362339   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:24.862271   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:25.362404   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:25.862486   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:26.361698   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:26.861683   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:27.362419   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:27.862717   90765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:27.960520   90765 kubeadm.go:1107] duration metric: took 12.757625343s to wait for elevateKubeSystemPrivileges
	W0429 12:25:27.960560   90765 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:27.960571   90765 kubeadm.go:393] duration metric: took 24.134810712s to StartCluster
	I0429 12:25:27.960596   90765 settings.go:142] acquiring lock: {Name:mk621a37adccd4e74925a8ead954bc985b3cae04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:27.960721   90765 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:25:27.961128   90765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/kubeconfig: {Name:mkee7e5279711dd24bf949f48e7e52043b732c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:27.961323   90765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:27.961345   90765 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 12:25:27.961321   90765 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0429 12:25:27.961452   90765 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051772"
	I0429 12:25:27.963259   90765 out.go:177] * Verifying Kubernetes components...
	I0429 12:25:27.961477   90765 addons.go:69] Setting default-storageclass=true in profile "addons-051772"
	I0429 12:25:27.961486   90765 addons.go:69] Setting yakd=true in profile "addons-051772"
	I0429 12:25:27.961504   90765 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051772"
	I0429 12:25:27.961498   90765 addons.go:69] Setting metrics-server=true in profile "addons-051772"
	I0429 12:25:27.961508   90765 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-051772"
	I0429 12:25:27.961511   90765 addons.go:69] Setting gcp-auth=true in profile "addons-051772"
	I0429 12:25:27.961513   90765 addons.go:69] Setting registry=true in profile "addons-051772"
	I0429 12:25:27.961507   90765 addons.go:69] Setting cloud-spanner=true in profile "addons-051772"
	I0429 12:25:27.961517   90765 addons.go:69] Setting ingress-dns=true in profile "addons-051772"
	I0429 12:25:27.961516   90765 addons.go:69] Setting ingress=true in profile "addons-051772"
	I0429 12:25:27.961521   90765 addons.go:69] Setting inspektor-gadget=true in profile "addons-051772"
	I0429 12:25:27.961523   90765 addons.go:69] Setting storage-provisioner=true in profile "addons-051772"
	I0429 12:25:27.961522   90765 addons.go:69] Setting helm-tiller=true in profile "addons-051772"
	I0429 12:25:27.961532   90765 addons.go:69] Setting volumesnapshots=true in profile "addons-051772"
	I0429 12:25:27.961544   90765 config.go:182] Loaded profile config "addons-051772": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:25:27.961545   90765 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051772"
	I0429 12:25:27.963340   90765 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051772"
	I0429 12:25:27.963369   90765 addons.go:234] Setting addon registry=true in "addons-051772"
	I0429 12:25:27.963398   90765 addons.go:234] Setting addon yakd=true in "addons-051772"
	I0429 12:25:27.963398   90765 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051772"
	I0429 12:25:27.963417   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963414   90765 addons.go:234] Setting addon inspektor-gadget=true in "addons-051772"
	I0429 12:25:27.963434   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963456   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963465   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963871   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.963887   90765 addons.go:234] Setting addon metrics-server=true in "addons-051772"
	I0429 12:25:27.963889   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.963893   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.963899   90765 addons.go:234] Setting addon ingress-dns=true in "addons-051772"
	I0429 12:25:27.963901   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.963911   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963912   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963920   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963924   90765 mustload.go:65] Loading cluster: addons-051772
	I0429 12:25:27.963929   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.963946   90765 addons.go:234] Setting addon helm-tiller=true in "addons-051772"
	I0429 12:25:27.963977   90765 addons.go:234] Setting addon ingress=true in "addons-051772"
	I0429 12:25:27.964018   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.964038   90765 addons.go:234] Setting addon volumesnapshots=true in "addons-051772"
	I0429 12:25:27.964057   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.964091   90765 config.go:182] Loaded profile config "addons-051772": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:25:27.964264   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.964287   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963379   90765 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-051772"
	I0429 12:25:27.964317   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.964364   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.964382   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963904   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.964400   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.964415   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.963882   90765 addons.go:234] Setting addon cloud-spanner=true in "addons-051772"
	I0429 12:25:27.964019   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.964435   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963880   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.964447   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.964480   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963912   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963887   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.964558   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.963929   90765 addons.go:234] Setting addon storage-provisioner=true in "addons-051772"
	I0429 12:25:27.966443   90765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:27.964266   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.966500   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.964809   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.966543   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.964829   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.966577   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.964851   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.964887   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:27.984878   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0429 12:25:27.985343   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:27.985892   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:27.985915   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:27.986250   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:27.986821   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.986860   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.993865   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0429 12:25:27.994341   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:27.994855   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:27.994884   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:27.995241   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:27.995796   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.995844   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.997281   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0429 12:25:27.997599   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:27.997702   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0429 12:25:27.998046   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:27.998069   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:27.998415   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:27.999008   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:27.999048   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:27.999942   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I0429 12:25:28.000077   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.000114   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.000347   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.000377   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.000903   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.000985   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0429 12:25:28.001618   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.001637   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.001700   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.002243   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.002298   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.002433   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.002542   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0429 12:25:28.002934   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.003156   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.004078   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.004142   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.004715   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.004746   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.004893   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0429 12:25:28.005458   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.005596   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.006868   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I0429 12:25:28.007388   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.007420   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.008265   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.008804   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.008823   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.009206   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.009821   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.009860   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.010065   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.010551   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:28.010910   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.010949   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.011546   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.011568   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.011989   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.012116   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.012209   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.012694   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.012711   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.016689   90765 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-051772"
	I0429 12:25:28.016746   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:28.017120   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.017153   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.019354   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0429 12:25:28.019868   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.020058   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.020630   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.020659   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.021110   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.021137   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.021490   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.021835   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.024447   90765 addons.go:234] Setting addon default-storageclass=true in "addons-051772"
	I0429 12:25:28.024489   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:28.024833   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.024851   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.026992   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I0429 12:25:28.027450   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.028027   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.028045   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.028423   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.028476   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0429 12:25:28.028760   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.029286   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.029750   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.029769   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.030168   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.030757   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.030785   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.030953   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.033587   90765 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 12:25:28.034955   90765 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 12:25:28.036363   90765 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0429 12:25:28.037950   90765 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 12:25:28.037972   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 12:25:28.038005   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.041335   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.041729   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.041749   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.041992   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.042073   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0429 12:25:28.042375   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.042533   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.042679   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.046039   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0429 12:25:28.046543   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.046972   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.046988   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.047353   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.047536   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.048251   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.048933   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.048957   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.049254   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0429 12:25:28.049420   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.049480   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.051247   90765 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 12:25:28.049781   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.050193   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.052533   90765 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 12:25:28.052549   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 12:25:28.052569   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.053288   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.053306   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.056348   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.057101   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.057130   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.057471   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0429 12:25:28.057599   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0429 12:25:28.057673   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0429 12:25:28.057993   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.058070   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.058363   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.059994   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 12:25:28.058679   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.058709   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.058708   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.058790   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.059003   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.059471   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0429 12:25:28.061134   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0429 12:25:28.061613   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 12:25:28.061631   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 12:25:28.061650   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.061691   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.062371   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.062482   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.062491   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.062544   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.062599   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.062617   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.062625   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.062846   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.063267   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.063286   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.063345   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.063477   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.063511   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.063617   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.063685   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.063740   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.063909   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.063942   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.064074   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.064098   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.064580   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.064607   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.067837   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.067915   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.067983   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.067994   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0429 12:25:28.068016   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.068020   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.068033   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.068044   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0429 12:25:28.068468   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.068633   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.068645   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.070173   90765 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 12:25:28.068875   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.068950   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.069173   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.069431   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.071595   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.071720   90765 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 12:25:28.071733   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 12:25:28.071752   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.071960   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.072028   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.072492   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.072632   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.072646   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.072753   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.072942   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.073471   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.073509   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.076462   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.076946   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.078810   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 12:25:28.077418   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.077602   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.078314   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:28.080229   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:28.080302   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.081678   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 12:25:28.080593   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.084145   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 12:25:28.083152   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.086720   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 12:25:28.085585   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I0429 12:25:28.085769   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.085807   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I0429 12:25:28.086793   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0429 12:25:28.087468   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I0429 12:25:28.089198   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 12:25:28.087836   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I0429 12:25:28.088667   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.088783   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.089019   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.089276   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.089874   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42229
	I0429 12:25:28.090357   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0429 12:25:28.091733   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 12:25:28.091085   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.091145   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.091479   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.091519   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.091929   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.091992   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.093292   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 12:25:28.093388   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.093407   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.093417   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.093477   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.093783   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.093924   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.094921   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.094972   90765 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 12:25:28.096338   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.096376   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 12:25:28.096396   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 12:25:28.096417   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.095123   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.095506   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.095506   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.095528   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.095570   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.095881   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.097283   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.097288   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.097292   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.097329   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.097360   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0429 12:25:28.097429   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.097443   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.097545   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.097821   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.097932   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.098031   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.098083   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.098188   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.099159   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.099176   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.099605   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.099945   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.100603   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.102437   90765 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:28.103822   90765 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:28.103841   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:28.103861   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.102763   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.101634   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.101761   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.102148   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.101377   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.103420   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.104052   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.104075   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.106526   90765 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 12:25:28.105333   90765 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:28.105356   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.105394   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.107824   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.108190   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:28.108200   90765 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 12:25:28.108382   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0429 12:25:28.108877   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.109609   90765 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 12:25:28.109634   90765 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 12:25:28.109707   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.109726   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0429 12:25:28.110072   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.111368   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.111377   90765 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 12:25:28.111834   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.113066   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.113080   90765 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 12:25:28.113325   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.113335   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.113550   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:28.115351   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.115819   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.115987   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.116302   90765 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 12:25:28.116310   90765 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 12:25:28.116387   90765 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 12:25:28.116476   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.116632   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.117077   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:28.117553   90765 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 12:25:28.117569   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.117569   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.117574   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 12:25:28.117595   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.117596   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.117612   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 12:25:28.117616   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 12:25:28.117625   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.117628   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.117648   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:28.117657   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 12:25:28.117668   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.117508   90765 out.go:177]   - Using image docker.io/busybox:stable
	I0429 12:25:28.118973   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.119112   90765 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 12:25:28.119130   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 12:25:28.119147   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.118809   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.118832   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.119438   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.119808   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.119988   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.121203   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:28.121442   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:28.122342   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.123123   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.123214   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.123243   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.123429   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.123623   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.124999   90765 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 12:25:28.123773   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.123900   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.123901   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.124364   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.124823   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:28.125495   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.126304   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.126071   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.126347   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.126379   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.126397   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.126421   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.126431   90765 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 12:25:28.126437   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.126441   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 12:25:28.126467   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.126535   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.126623   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.128106   90765 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 12:25:28.126660   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.126796   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.126809   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.126815   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.126831   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.126914   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.129142   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.129401   90765 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 12:25:28.129415   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 12:25:28.129416   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.129431   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:28.129547   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.129570   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.129623   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.129770   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.129818   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.129824   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.129896   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.129936   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.130011   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.130444   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.130460   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.130610   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.130754   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.130937   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.131069   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:28.132065   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.132359   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:28.132388   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:28.132551   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:28.132718   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:28.132820   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:28.132916   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	W0429 12:25:28.135960   90765 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40464->192.168.39.38:22: read: connection reset by peer
	I0429 12:25:28.135980   90765 retry.go:31] will retry after 335.202089ms: ssh: handshake failed: read tcp 192.168.39.1:40464->192.168.39.38:22: read: connection reset by peer
	W0429 12:25:28.136093   90765 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40476->192.168.39.38:22: read: connection reset by peer
	I0429 12:25:28.136117   90765 retry.go:31] will retry after 334.754686ms: ssh: handshake failed: read tcp 192.168.39.1:40476->192.168.39.38:22: read: connection reset by peer
	I0429 12:25:28.432337   90765 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 12:25:28.432376   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 12:25:28.598040   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 12:25:28.658766   90765 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 12:25:28.658792   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 12:25:28.822280   90765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:28.822360   90765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:28.897429   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 12:25:28.900364   90765 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 12:25:28.900385   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 12:25:28.926901   90765 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 12:25:28.926926   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 12:25:28.973170   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 12:25:28.973197   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 12:25:29.021445   90765 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 12:25:29.021473   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 12:25:29.031450   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:29.051529   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 12:25:29.104947   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:29.118620   90765 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 12:25:29.118642   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 12:25:29.138984   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 12:25:29.293976   90765 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 12:25:29.294005   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 12:25:29.304767   90765 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 12:25:29.304790   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 12:25:29.321809   90765 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 12:25:29.321833   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 12:25:29.392823   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 12:25:29.392848   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 12:25:29.418275   90765 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 12:25:29.418301   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 12:25:29.591948   90765 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 12:25:29.591974   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 12:25:29.803410   90765 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 12:25:29.803441   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 12:25:29.816242   90765 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 12:25:29.816266   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 12:25:29.834916   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 12:25:29.854725   90765 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 12:25:29.854753   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 12:25:29.864891   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 12:25:29.945470   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 12:25:29.945496   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 12:25:30.039695   90765 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:25:30.039723   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 12:25:30.069427   90765 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 12:25:30.069457   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 12:25:30.073709   90765 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 12:25:30.073729   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 12:25:30.080905   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 12:25:30.080923   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 12:25:30.081923   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 12:25:30.136775   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 12:25:30.136802   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 12:25:30.207758   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:25:30.291810   90765 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 12:25:30.291837   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 12:25:30.312527   90765 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 12:25:30.312550   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 12:25:30.407080   90765 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 12:25:30.407109   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 12:25:30.417759   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 12:25:30.614957   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 12:25:30.698896   90765 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 12:25:30.698923   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 12:25:30.753874   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 12:25:30.753909   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 12:25:30.964112   90765 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 12:25:30.964139   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 12:25:31.044927   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 12:25:31.044970   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 12:25:31.232231   90765 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 12:25:31.232261   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 12:25:31.323172   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 12:25:31.323198   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 12:25:31.492332   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 12:25:31.563815   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 12:25:31.563854   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 12:25:32.130457   90765 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 12:25:32.130484   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 12:25:32.460298   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 12:25:35.125613   90765 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 12:25:35.125668   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:35.128714   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:35.129133   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:35.129184   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:35.129317   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:35.129545   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:35.129695   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:35.129886   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:35.433720   90765 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 12:25:35.498949   90765 addons.go:234] Setting addon gcp-auth=true in "addons-051772"
	I0429 12:25:35.499038   90765 host.go:66] Checking if "addons-051772" exists ...
	I0429 12:25:35.499379   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:35.499409   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:35.514531   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0429 12:25:35.514969   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:35.515408   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:35.515427   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:35.515800   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:35.516450   90765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:25:35.516493   90765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:25:35.532010   90765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0429 12:25:35.532429   90765 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:35.532874   90765 main.go:141] libmachine: Using API Version  1
	I0429 12:25:35.532901   90765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:35.533216   90765 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:35.533384   90765 main.go:141] libmachine: (addons-051772) Calling .GetState
	I0429 12:25:35.534973   90765 main.go:141] libmachine: (addons-051772) Calling .DriverName
	I0429 12:25:35.535236   90765 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 12:25:35.535267   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHHostname
	I0429 12:25:35.537982   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:35.538468   90765 main.go:141] libmachine: (addons-051772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:60:43", ip: ""} in network mk-addons-051772: {Iface:virbr1 ExpiryTime:2024-04-29 13:24:43 +0000 UTC Type:0 Mac:52:54:00:af:60:43 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:addons-051772 Clientid:01:52:54:00:af:60:43}
	I0429 12:25:35.538502   90765 main.go:141] libmachine: (addons-051772) DBG | domain addons-051772 has defined IP address 192.168.39.38 and MAC address 52:54:00:af:60:43 in network mk-addons-051772
	I0429 12:25:35.538665   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHPort
	I0429 12:25:35.538860   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHKeyPath
	I0429 12:25:35.539046   90765 main.go:141] libmachine: (addons-051772) Calling .GetSSHUsername
	I0429 12:25:35.539191   90765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/addons-051772/id_rsa Username:docker}
	I0429 12:25:37.059350   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.461270439s)
	I0429 12:25:37.059404   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059416   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059410   90765 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.237091246s)
	I0429 12:25:37.059460   90765 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.237069664s)
	I0429 12:25:37.059496   90765 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0429 12:25:37.059520   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.162065778s)
	I0429 12:25:37.059540   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059549   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059614   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.028130739s)
	I0429 12:25:37.059690   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.008127381s)
	I0429 12:25:37.059702   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059742   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059768   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.954794959s)
	I0429 12:25:37.059792   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059802   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059899   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.920877923s)
	I0429 12:25:37.059915   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059923   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059727   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.059984   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.225041795s)
	I0429 12:25:37.059990   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.059999   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060006   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060065   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.195145329s)
	I0429 12:25:37.060079   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060088   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060140   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.978184557s)
	I0429 12:25:37.060175   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060189   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060348   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060379   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060420   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060428   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060436   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060443   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060494   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060522   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060532   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060540   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060546   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060601   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060569   90765 node_ready.go:35] waiting up to 6m0s for node "addons-051772" to be "Ready" ...
	I0429 12:25:37.060628   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060636   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060643   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060650   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060691   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060693   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060697   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060706   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060713   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060721   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060728   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060736   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060743   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060742   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060769   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060776   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060785   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060792   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060834   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060849   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.060865   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060872   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060880   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060886   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060924   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.060931   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.060938   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.060951   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.060985   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.061002   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.061009   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.061015   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.061021   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.061122   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.061149   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.061156   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.061359   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.853548561s)
	I0429 12:25:37.061389   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.061400   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.061470   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.643685704s)
	I0429 12:25:37.061484   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.061492   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.061630   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.446642175s)
	W0429 12:25:37.061660   90765 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 12:25:37.061678   90765 retry.go:31] will retry after 294.891532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 12:25:37.061755   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.569388132s)
	I0429 12:25:37.061769   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.061778   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.061845   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.061866   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.061872   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.061881   90765 addons.go:470] Verifying addon ingress=true in "addons-051772"
	I0429 12:25:37.064961   90765 out.go:177] * Verifying ingress addon...
	I0429 12:25:37.064553   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064572   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064586   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.067743   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.064596   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.067813   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.064614   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064628   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064634   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064648   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064648   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064667   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064668   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064680   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064683   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064697   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064700   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064733   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.064735   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.064755   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.066913   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.066957   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.067913   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068004   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.068026   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068044   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.068058   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.068067   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068059   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068111   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068030   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.068103   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.069433   90765 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051772 service yakd-dashboard -n yakd-dashboard
	
	I0429 12:25:37.068159   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.068160   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.067922   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.068423   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.068448   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.068469   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.068493   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.068668   90765 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 12:25:37.070623   90765 addons.go:470] Verifying addon registry=true in "addons-051772"
	I0429 12:25:37.070657   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.070661   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.070677   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.071943   90765 out.go:177] * Verifying registry addon...
	I0429 12:25:37.070879   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.071710   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.073198   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.073211   90765 addons.go:470] Verifying addon metrics-server=true in "addons-051772"
	I0429 12:25:37.073848   90765 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 12:25:37.098926   90765 node_ready.go:49] node "addons-051772" has status "Ready":"True"
	I0429 12:25:37.098951   90765 node_ready.go:38] duration metric: took 38.334852ms for node "addons-051772" to be "Ready" ...
	I0429 12:25:37.098960   90765 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:25:37.135273   90765 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 12:25:37.135306   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:37.137453   90765 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 12:25:37.137476   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:37.144396   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.144412   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.144695   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.144709   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	W0429 12:25:37.144797   90765 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0429 12:25:37.171705   90765 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace to be "Ready" ...
	I0429 12:25:37.181264   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:37.181288   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:37.181642   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:37.181696   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:37.181704   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:37.357189   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 12:25:37.564332   90765 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-051772" context rescaled to 1 replicas
	I0429 12:25:37.581872   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:37.590286   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:38.082990   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:38.083230   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:38.377563   90765 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.842294739s)
	I0429 12:25:38.379150   90765 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 12:25:38.377807   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.917450162s)
	I0429 12:25:38.380554   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:38.381770   90765 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 12:25:38.380576   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:38.383145   90765 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 12:25:38.383169   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 12:25:38.383490   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:38.383510   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:38.383520   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:38.383529   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:38.383565   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:38.383821   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:38.383840   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:38.383852   90765 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-051772"
	I0429 12:25:38.385289   90765 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 12:25:38.387219   90765 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 12:25:38.425202   90765 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 12:25:38.425224   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:38.439298   90765 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 12:25:38.439323   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 12:25:38.558028   90765 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 12:25:38.558055   90765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 12:25:38.576007   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:38.578893   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:38.729667   90765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 12:25:38.898954   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:39.076006   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:39.081472   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:39.178740   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:39.395939   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:39.528907   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.17165122s)
	I0429 12:25:39.528966   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:39.528983   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:39.529454   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:39.529476   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:39.529495   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:39.529504   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:39.529818   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:39.529835   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:39.529851   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:39.577920   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:39.579405   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:39.940050   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:39.977206   90765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.247490201s)
	I0429 12:25:39.977263   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:39.977279   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:39.977630   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:39.977649   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:39.977660   90765 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:39.977667   90765 main.go:141] libmachine: (addons-051772) Calling .Close
	I0429 12:25:39.977688   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:39.977914   90765 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:39.977936   90765 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:39.977955   90765 main.go:141] libmachine: (addons-051772) DBG | Closing plugin on server side
	I0429 12:25:39.979723   90765 addons.go:470] Verifying addon gcp-auth=true in "addons-051772"
	I0429 12:25:39.982257   90765 out.go:177] * Verifying gcp-auth addon...
	I0429 12:25:39.984662   90765 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 12:25:40.011975   90765 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 12:25:40.012000   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:40.077254   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:40.089686   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:40.393383   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:40.489151   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:40.575798   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:40.578305   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:40.895347   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:40.989063   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:41.093710   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:41.101040   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:41.398684   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:41.490435   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:41.575732   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:41.578109   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:41.678523   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:41.892714   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:41.989028   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:42.075498   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:42.077942   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:42.395128   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:42.492651   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:42.576472   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:42.580360   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:42.893447   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:42.988802   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:43.075777   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:43.078347   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:43.174576   90765 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-j2g9d" not found
	I0429 12:25:43.174610   90765 pod_ready.go:81] duration metric: took 6.002875343s for pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace to be "Ready" ...
	E0429 12:25:43.174626   90765 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-j2g9d" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-j2g9d" not found
	I0429 12:25:43.174634   90765 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace to be "Ready" ...
	I0429 12:25:43.394509   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:43.491016   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:43.575260   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:43.578539   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:43.893535   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:43.988933   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:44.074872   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:44.077755   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:44.393166   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:44.490566   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:44.575778   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:44.578224   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:44.892458   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:44.987670   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:45.076014   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:45.078710   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:45.180689   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:45.393728   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:45.489191   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:45.576669   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:45.578409   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:45.893349   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:45.989876   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:46.075185   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:46.078205   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:46.394112   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:46.492710   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:46.576183   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:46.579689   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:46.892364   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:46.992717   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:47.077035   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:47.078536   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:47.180896   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:47.393252   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:47.488121   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:47.581883   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:47.583259   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:47.892705   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:47.988477   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:48.075361   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:48.078191   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:48.394293   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:48.489019   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:48.575237   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:48.578237   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:48.892194   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:48.988797   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:49.076079   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:49.078926   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:49.181177   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:49.394153   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:49.488810   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:49.576978   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:49.578980   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:49.897084   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:49.988869   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:50.074411   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:50.078340   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:50.393713   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:50.496432   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:50.575709   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:50.578565   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:50.898577   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:50.988127   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:51.075137   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:51.077818   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:51.182035   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:51.393946   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:51.489092   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:51.575046   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:51.577743   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:51.893812   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:51.988305   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:52.075349   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:52.078494   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:52.393524   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:52.489240   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:52.575159   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:52.577916   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:52.893538   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:52.990665   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:53.076131   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:53.079480   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:53.393996   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:53.491203   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:53.576257   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:53.579975   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:53.682096   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:53.896081   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:53.992840   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:54.074249   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:54.078695   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:54.393978   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:54.488493   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:54.575563   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:54.578906   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:54.893506   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:54.989352   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:55.076340   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:55.079400   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:55.395702   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:55.490611   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:55.576406   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:55.579355   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:55.895009   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:55.990402   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:56.075987   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:56.078402   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:56.180836   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:56.393887   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:56.488301   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:56.576205   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:56.578590   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:56.892202   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:56.988736   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:57.075762   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:57.078399   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:57.393939   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:57.488545   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:57.576337   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:57.578784   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:57.894293   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:57.992561   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:58.076586   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:58.079422   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:58.182278   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:25:58.398931   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:58.490785   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:58.576729   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:58.579618   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:58.893265   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:58.988550   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:59.075510   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:59.078031   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:59.394323   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:59.488845   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:25:59.576245   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:25:59.579362   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:25:59.893719   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:25:59.988962   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:00.075880   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:00.079388   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:00.394546   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:00.495015   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:00.574872   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:00.581808   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:00.681632   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:26:00.897065   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:00.988424   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:01.077479   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:01.080111   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:01.392773   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:01.488673   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:01.578354   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:01.581950   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:01.893695   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:01.988739   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:02.076121   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:02.084149   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:02.394362   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:02.488930   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:02.575000   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:02.578544   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:02.681932   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:26:02.894181   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:02.988950   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:03.076044   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:03.079356   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:03.395815   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:03.487717   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:03.575861   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:03.578545   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:03.893196   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:03.991852   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:04.074960   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:04.077744   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:04.397256   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:04.492726   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:04.576112   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:04.578847   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:04.892483   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:04.989988   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:05.074844   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:05.078142   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:05.181149   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:26:05.399010   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:05.489409   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:05.576027   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:05.578783   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:05.894538   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:05.989186   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:06.075711   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:06.079040   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:06.393016   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:06.488993   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:06.577624   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:06.580558   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:06.896096   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:06.990672   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:07.080992   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:07.081151   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:07.182706   90765 pod_ready.go:102] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"False"
	I0429 12:26:07.394944   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:07.490828   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:07.576016   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:07.578905   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:07.899217   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:07.990131   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:08.078070   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:08.079915   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:08.395027   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:08.488879   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:08.577616   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:08.582860   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:08.891476   90765 pod_ready.go:92] pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:08.891501   90765 pod_ready.go:81] duration metric: took 25.716859627s for pod "coredns-7db6d8ff4d-jlh2m" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.891511   90765 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.919257   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:08.951944   90765 pod_ready.go:92] pod "etcd-addons-051772" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:08.951969   90765 pod_ready.go:81] duration metric: took 60.450945ms for pod "etcd-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.951982   90765 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.967537   90765 pod_ready.go:92] pod "kube-apiserver-addons-051772" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:08.967561   90765 pod_ready.go:81] duration metric: took 15.571952ms for pod "kube-apiserver-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.967572   90765 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.975280   90765 pod_ready.go:92] pod "kube-controller-manager-addons-051772" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:08.975299   90765 pod_ready.go:81] duration metric: took 7.720656ms for pod "kube-controller-manager-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.975309   90765 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njdqg" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.984473   90765 pod_ready.go:92] pod "kube-proxy-njdqg" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:08.984493   90765 pod_ready.go:81] duration metric: took 9.176906ms for pod "kube-proxy-njdqg" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.984504   90765 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:08.992994   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:09.074838   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:09.077529   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:09.079294   90765 pod_ready.go:92] pod "kube-scheduler-addons-051772" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:09.079311   90765 pod_ready.go:81] duration metric: took 94.799176ms for pod "kube-scheduler-addons-051772" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:09.079319   90765 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-52v7v" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:09.395309   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:09.488189   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:09.574845   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:09.578118   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:09.902928   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:09.988435   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:10.081200   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:10.082086   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:10.392981   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:10.488550   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:10.576170   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:10.578751   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:10.893959   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:10.997188   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:11.074599   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:11.077872   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:11.084267   90765 pod_ready.go:92] pod "metrics-server-c59844bb4-52v7v" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:11.084287   90765 pod_ready.go:81] duration metric: took 2.004961629s for pod "metrics-server-c59844bb4-52v7v" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:11.084299   90765 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hdg5d" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:11.393174   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:11.482922   90765 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hdg5d" in "kube-system" namespace has status "Ready":"True"
	I0429 12:26:11.482947   90765 pod_ready.go:81] duration metric: took 398.638421ms for pod "nvidia-device-plugin-daemonset-hdg5d" in "kube-system" namespace to be "Ready" ...
	I0429 12:26:11.482971   90765 pod_ready.go:38] duration metric: took 34.384002359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:26:11.482992   90765 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:26:11.483083   90765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:26:11.494708   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:11.507514   90765 api_server.go:72] duration metric: took 43.546044228s to wait for apiserver process to appear ...
	I0429 12:26:11.507547   90765 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:26:11.507575   90765 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0429 12:26:11.511950   90765 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0429 12:26:11.513047   90765 api_server.go:141] control plane version: v1.30.0
	I0429 12:26:11.513070   90765 api_server.go:131] duration metric: took 5.514784ms to wait for apiserver health ...
	I0429 12:26:11.513078   90765 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 12:26:11.575151   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:11.580578   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:11.686323   90765 system_pods.go:59] 18 kube-system pods found
	I0429 12:26:11.686355   90765 system_pods.go:61] "coredns-7db6d8ff4d-jlh2m" [b3f37502-dc66-4d7c-a47d-660d3806f4b8] Running
	I0429 12:26:11.686363   90765 system_pods.go:61] "csi-hostpath-attacher-0" [1a2df975-832d-4796-aa55-108738e687dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 12:26:11.686371   90765 system_pods.go:61] "csi-hostpath-resizer-0" [11533409-d444-41c9-b198-cabc12cc10ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 12:26:11.686379   90765 system_pods.go:61] "csi-hostpathplugin-pnc6v" [f4ed19b6-06e4-4106-bcab-7b4101e75496] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 12:26:11.686383   90765 system_pods.go:61] "etcd-addons-051772" [a76cd430-bd9a-46c5-9947-7e0aef1e4953] Running
	I0429 12:26:11.686388   90765 system_pods.go:61] "kube-apiserver-addons-051772" [98bed16c-494d-4b77-b9b6-1b6e4ffab242] Running
	I0429 12:26:11.686391   90765 system_pods.go:61] "kube-controller-manager-addons-051772" [28b5f3cb-1d9d-4c55-a103-b9fcbe7475cc] Running
	I0429 12:26:11.686395   90765 system_pods.go:61] "kube-ingress-dns-minikube" [3f65b6bb-031b-4047-a0c4-333553ae0a8e] Running
	I0429 12:26:11.686398   90765 system_pods.go:61] "kube-proxy-njdqg" [ea3276cc-1e5f-462e-b9db-feca5c8983ea] Running
	I0429 12:26:11.686401   90765 system_pods.go:61] "kube-scheduler-addons-051772" [3c187c8b-c651-47aa-a92e-a423e2f511a8] Running
	I0429 12:26:11.686404   90765 system_pods.go:61] "metrics-server-c59844bb4-52v7v" [00789e18-96fd-48cf-aac2-1dff3b7046c1] Running
	I0429 12:26:11.686408   90765 system_pods.go:61] "nvidia-device-plugin-daemonset-hdg5d" [8d78a716-78c0-4ec1-aa75-4a1757524a08] Running
	I0429 12:26:11.686411   90765 system_pods.go:61] "registry-jlgc4" [9d9bc9f0-f92a-4894-917d-18a54af96e8f] Running
	I0429 12:26:11.686416   90765 system_pods.go:61] "registry-proxy-4hmwj" [f42027ec-8780-4684-9de4-696063e93160] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 12:26:11.686421   90765 system_pods.go:61] "snapshot-controller-745499f584-tdssq" [460a606c-b64b-49d7-b87a-f3b7855e9a71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 12:26:11.686427   90765 system_pods.go:61] "snapshot-controller-745499f584-xdtb9" [e6790f84-388b-459a-b144-709ee471462a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 12:26:11.686431   90765 system_pods.go:61] "storage-provisioner" [4b2ab54d-1ceb-4a1a-b4db-311b827d52f1] Running
	I0429 12:26:11.686440   90765 system_pods.go:61] "tiller-deploy-6677d64bcd-f4xt8" [9a11663a-a4d0-4dbe-80d4-5a21e11bde15] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 12:26:11.686450   90765 system_pods.go:74] duration metric: took 173.362591ms to wait for pod list to return data ...
	I0429 12:26:11.686458   90765 default_sa.go:34] waiting for default service account to be created ...
	I0429 12:26:11.878160   90765 default_sa.go:45] found service account: "default"
	I0429 12:26:11.878187   90765 default_sa.go:55] duration metric: took 191.723959ms for default service account to be created ...
	I0429 12:26:11.878198   90765 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 12:26:11.893033   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:11.989627   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:12.075948   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:12.079696   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:12.091794   90765 system_pods.go:86] 18 kube-system pods found
	I0429 12:26:12.091821   90765 system_pods.go:89] "coredns-7db6d8ff4d-jlh2m" [b3f37502-dc66-4d7c-a47d-660d3806f4b8] Running
	I0429 12:26:12.091830   90765 system_pods.go:89] "csi-hostpath-attacher-0" [1a2df975-832d-4796-aa55-108738e687dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 12:26:12.091837   90765 system_pods.go:89] "csi-hostpath-resizer-0" [11533409-d444-41c9-b198-cabc12cc10ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 12:26:12.091844   90765 system_pods.go:89] "csi-hostpathplugin-pnc6v" [f4ed19b6-06e4-4106-bcab-7b4101e75496] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 12:26:12.091849   90765 system_pods.go:89] "etcd-addons-051772" [a76cd430-bd9a-46c5-9947-7e0aef1e4953] Running
	I0429 12:26:12.091856   90765 system_pods.go:89] "kube-apiserver-addons-051772" [98bed16c-494d-4b77-b9b6-1b6e4ffab242] Running
	I0429 12:26:12.091863   90765 system_pods.go:89] "kube-controller-manager-addons-051772" [28b5f3cb-1d9d-4c55-a103-b9fcbe7475cc] Running
	I0429 12:26:12.091869   90765 system_pods.go:89] "kube-ingress-dns-minikube" [3f65b6bb-031b-4047-a0c4-333553ae0a8e] Running
	I0429 12:26:12.091880   90765 system_pods.go:89] "kube-proxy-njdqg" [ea3276cc-1e5f-462e-b9db-feca5c8983ea] Running
	I0429 12:26:12.091887   90765 system_pods.go:89] "kube-scheduler-addons-051772" [3c187c8b-c651-47aa-a92e-a423e2f511a8] Running
	I0429 12:26:12.091894   90765 system_pods.go:89] "metrics-server-c59844bb4-52v7v" [00789e18-96fd-48cf-aac2-1dff3b7046c1] Running
	I0429 12:26:12.091900   90765 system_pods.go:89] "nvidia-device-plugin-daemonset-hdg5d" [8d78a716-78c0-4ec1-aa75-4a1757524a08] Running
	I0429 12:26:12.091905   90765 system_pods.go:89] "registry-jlgc4" [9d9bc9f0-f92a-4894-917d-18a54af96e8f] Running
	I0429 12:26:12.091909   90765 system_pods.go:89] "registry-proxy-4hmwj" [f42027ec-8780-4684-9de4-696063e93160] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 12:26:12.091920   90765 system_pods.go:89] "snapshot-controller-745499f584-tdssq" [460a606c-b64b-49d7-b87a-f3b7855e9a71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 12:26:12.091926   90765 system_pods.go:89] "snapshot-controller-745499f584-xdtb9" [e6790f84-388b-459a-b144-709ee471462a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 12:26:12.091932   90765 system_pods.go:89] "storage-provisioner" [4b2ab54d-1ceb-4a1a-b4db-311b827d52f1] Running
	I0429 12:26:12.091938   90765 system_pods.go:89] "tiller-deploy-6677d64bcd-f4xt8" [9a11663a-a4d0-4dbe-80d4-5a21e11bde15] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 12:26:12.091945   90765 system_pods.go:126] duration metric: took 213.741858ms to wait for k8s-apps to be running ...
	I0429 12:26:12.091958   90765 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 12:26:12.092009   90765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:26:12.113031   90765 system_svc.go:56] duration metric: took 21.06371ms WaitForService to wait for kubelet
	I0429 12:26:12.113062   90765 kubeadm.go:576] duration metric: took 44.151597164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:26:12.113085   90765 node_conditions.go:102] verifying NodePressure condition ...
	I0429 12:26:12.279375   90765 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 12:26:12.279406   90765 node_conditions.go:123] node cpu capacity is 2
	I0429 12:26:12.279419   90765 node_conditions.go:105] duration metric: took 166.32838ms to run NodePressure ...
	I0429 12:26:12.279431   90765 start.go:240] waiting for startup goroutines ...
	I0429 12:26:12.393027   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:12.488350   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:12.575036   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:12.578136   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:12.894738   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:12.988830   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:13.075923   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:13.080708   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:13.393038   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:13.488510   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:13.575952   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:13.579195   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:13.893541   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:13.988147   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:14.074966   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:14.077691   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:14.393670   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:14.488677   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:14.575747   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:14.578882   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:14.892831   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:14.988894   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:15.076184   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:15.080797   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:15.394694   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:15.488984   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:15.575252   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:15.578305   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:15.895220   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:15.996928   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:16.074728   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:16.082363   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:16.394492   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:16.489267   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:16.576150   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:16.579221   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:16.896374   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:16.988607   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:17.075497   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:17.078496   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:17.399472   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:17.490292   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:17.575273   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:17.577729   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:17.893920   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:17.988332   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:18.075636   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:18.078550   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:18.393282   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:18.488818   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:18.575996   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:18.578506   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:18.895558   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:18.988206   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:19.075222   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:19.078608   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:19.393531   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:19.488282   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:19.578728   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:19.580677   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:19.895059   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:19.991529   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:20.076437   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:20.086678   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:20.395773   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:20.489414   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:20.576124   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:20.578860   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:20.896706   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:20.989106   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:21.075639   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:21.078138   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:21.394485   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:21.490247   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:21.575361   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:21.578341   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:21.898024   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:21.990158   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:22.075614   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:22.081620   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:22.393003   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:22.489152   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:22.576228   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:22.579028   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:22.893875   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:22.988975   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:23.075658   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:23.078389   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:23.402807   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:23.488437   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:23.575031   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:23.577967   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:23.893649   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:23.988387   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:24.080128   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:24.084296   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:24.393451   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:24.488626   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:24.575488   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:24.578933   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:24.893828   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:24.988756   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:25.075674   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:25.078650   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:25.393405   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:25.489065   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:25.574727   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:25.577547   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:25.892924   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:25.990005   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:26.075319   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:26.078070   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:26.394007   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:26.489188   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:26.575496   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:26.580062   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:26.893887   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:26.989568   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:27.076144   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:27.079591   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:27.394497   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:27.489667   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:27.576456   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:27.579560   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:27.897727   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:27.990072   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:28.074916   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:28.077606   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:28.393263   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:28.489336   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:28.575736   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:28.579419   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:28.892942   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:28.988552   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:29.076411   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:29.082475   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:29.394121   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:29.489164   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:29.587194   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:29.587332   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:29.900761   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:29.988878   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:30.074761   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:30.078462   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:30.393624   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:30.488391   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:30.576066   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:30.580168   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 12:26:30.893163   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:30.988525   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:31.083587   90765 kapi.go:107] duration metric: took 54.009734901s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 12:26:31.083639   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:31.394695   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:31.490311   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:31.577799   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:31.894060   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:31.988321   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:32.075461   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:32.393996   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:32.488728   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:32.575610   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:32.893426   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:32.991411   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:33.075506   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:33.392230   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:33.489814   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:33.576196   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:33.911605   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:33.988335   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:34.076003   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:34.393380   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:34.489453   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:34.578173   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:34.893013   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:34.989833   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:35.075230   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:35.393382   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:35.489338   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:35.575409   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:35.893299   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:35.989542   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:36.075759   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:36.392520   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:36.488415   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:36.575359   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:36.893366   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:36.988940   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:37.075420   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:37.400435   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:37.489463   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:37.768327   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:37.898323   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:37.992873   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:38.075778   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:38.393845   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:38.488176   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:38.578463   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:38.908343   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:38.990780   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:39.080772   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:39.408793   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:39.491848   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:39.576895   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:39.898767   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:39.988989   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:40.075969   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:40.399938   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:40.488138   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:40.575455   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:40.898803   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:40.988753   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:41.076467   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:41.393357   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:41.489528   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:41.575474   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:41.894276   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:41.988832   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:42.075166   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:42.400579   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:42.489246   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:42.575764   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:42.893552   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:42.988456   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:43.074948   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:43.396653   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:43.489338   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:43.577783   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:43.897196   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:43.989058   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:44.075832   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:44.394045   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:44.489009   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:44.575243   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:44.892989   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:44.988870   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:45.074872   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:45.414972   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:45.497557   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:45.575807   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:45.892101   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:45.988727   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:46.076342   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:46.405663   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:46.488533   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:46.575761   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:46.946324   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:46.990941   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:47.075774   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:47.392855   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:47.489254   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:47.578517   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:47.896875   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:47.988239   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:48.075531   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:48.396950   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:48.493582   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:48.581857   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:48.894192   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:48.989288   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:49.076796   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:49.392164   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:49.490123   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:49.575413   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:49.896251   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:49.990162   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:50.075733   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:50.397122   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:50.488106   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:50.574899   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:50.892364   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:50.988650   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:51.075518   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:51.395236   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:51.492939   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:51.575185   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:51.892752   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:51.988567   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:52.079690   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:52.398385   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:52.488664   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:52.575643   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:52.900830   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:52.988654   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:53.079496   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:53.393240   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:53.488391   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:53.575111   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:53.892966   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 12:26:53.988960   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:54.074734   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:54.392327   90765 kapi.go:107] duration metric: took 1m16.005101868s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 12:26:54.488916   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:54.578367   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:54.991191   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:55.075990   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:55.488440   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:55.576562   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:55.988579   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:56.076140   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:56.489342   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:56.576199   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:56.989519   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:57.076966   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:57.488762   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:57.576130   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:57.989740   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:58.075558   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:58.489121   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:58.575159   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:58.988110   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:59.075505   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:59.489148   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:26:59.576735   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:26:59.989152   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:00.075625   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:00.489213   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:00.575656   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:00.988250   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:01.077659   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:01.489838   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:01.576986   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:01.989752   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:02.076323   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:02.488425   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:02.579699   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:02.989682   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:03.076877   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:03.488705   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:03.576078   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:03.987885   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:04.078652   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:04.489865   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:04.575796   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:04.988851   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:05.075659   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:05.490506   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:05.575215   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:05.990883   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:06.074861   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:06.489943   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:06.703926   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:06.989363   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:07.076117   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:07.488394   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:07.576646   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:07.989077   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:08.077729   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:08.488947   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:08.576127   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:08.988536   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:09.075970   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:09.491733   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:09.575777   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:09.989380   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:10.077941   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:10.489731   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:10.576412   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:10.988447   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:11.077714   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:11.489181   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:11.575958   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:11.988951   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:12.076086   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:12.488765   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:12.576015   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:12.988269   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:13.075976   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:13.488706   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:13.579652   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:13.989370   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:14.076253   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:14.489920   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:14.575264   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:14.988358   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:15.075849   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:15.488880   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:15.576514   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:15.988076   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:16.075294   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:16.488934   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:16.576225   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:16.989333   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:17.075968   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:17.489248   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:17.575898   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:17.989469   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:18.075688   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:18.488664   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:18.575890   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:18.989711   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:19.076716   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:19.489860   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:19.575402   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:19.989711   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:20.075871   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:20.489521   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:20.576951   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:20.989030   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:21.075550   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:21.488884   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:21.575305   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:21.988722   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:22.075587   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:22.488488   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:22.576395   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:22.988605   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:23.075415   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:23.489438   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:23.575914   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:23.988883   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:24.076564   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:24.490239   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:24.575540   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:24.989531   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:25.075945   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:25.488312   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:25.577101   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:25.988881   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:26.075160   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:26.488785   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:26.576725   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:26.989031   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:27.075564   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:27.488220   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:27.576620   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:27.988847   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:28.080249   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:28.489317   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:28.575727   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:28.988993   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:29.075431   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:29.490533   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:29.575709   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:29.989136   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:30.079304   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:30.494385   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:30.576893   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:30.989438   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:31.077093   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:31.489651   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:31.575887   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:31.988672   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:32.078504   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:32.488663   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:32.580760   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:32.989208   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:33.076214   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:33.489291   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:33.575235   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:33.988608   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:34.076165   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:34.489327   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:34.575578   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:34.988783   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:35.076221   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:35.490132   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:35.576247   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:35.988619   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:36.075239   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:36.490662   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:36.576422   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:36.988756   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:37.075943   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:37.488985   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:37.575829   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:37.988849   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:38.075774   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:38.488719   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:38.576346   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:38.992379   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:39.075423   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:39.488706   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:39.576058   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:39.988052   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:40.075455   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:40.488368   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:40.578487   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:40.988800   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:41.077699   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:41.488944   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:41.576021   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:41.988984   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:42.075445   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:42.489033   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:42.575607   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:42.988795   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:43.075461   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:43.489548   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:43.575608   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:43.989053   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:44.075382   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:44.492040   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:44.576755   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:44.989333   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:45.076842   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:45.489210   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:45.575680   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:45.989577   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:46.076612   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:46.491772   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:46.576540   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:46.989295   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:47.076199   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:47.489413   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:47.579241   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:47.988457   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:48.075479   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:48.488999   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:48.575309   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:48.988509   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:49.075854   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:49.489582   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:49.576159   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:49.988114   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:50.075335   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:50.488818   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:50.575775   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:50.988697   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:51.075888   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:51.489601   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:51.578051   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:51.989398   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:52.076527   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:52.489814   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:52.575557   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:52.989424   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:53.076090   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:53.489528   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:53.577971   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:53.989459   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:54.077917   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:54.488944   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:54.577111   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:54.989022   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:55.076750   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:55.489058   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:55.575976   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:55.990672   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:56.075796   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:56.489267   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:56.575526   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:56.989165   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:57.076364   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:57.500326   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:57.578337   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:57.988708   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:58.077044   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:58.488980   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:58.574940   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:58.989739   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:59.077148   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:59.487821   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:27:59.576119   90765 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 12:27:59.988543   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:00.076061   90765 kapi.go:107] duration metric: took 2m23.007389801s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 12:28:00.488960   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:00.988953   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:01.490019   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:01.991852   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:02.488330   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:02.989090   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:03.488982   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:03.989198   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:04.489312   90765 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 12:28:04.989257   90765 kapi.go:107] duration metric: took 2m25.004593046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 12:28:04.991200   90765 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-051772 cluster.
	I0429 12:28:04.992736   90765 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 12:28:04.994149   90765 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 12:28:04.995676   90765 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, yakd, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0429 12:28:04.997192   90765 addons.go:505] duration metric: took 2m37.035844117s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner yakd helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0429 12:28:04.997236   90765 start.go:245] waiting for cluster config update ...
	I0429 12:28:04.997254   90765 start.go:254] writing updated cluster config ...
	I0429 12:28:04.997537   90765 ssh_runner.go:195] Run: rm -f paused
	I0429 12:28:05.055005   90765 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 12:28:05.056889   90765 out.go:177] * Done! kubectl is now configured to use "addons-051772" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	10fcc90bcde52       a416a98b71e22       Less than a second ago   Running             helper-pod                               0                   5c8b5e9474b61       helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94
	7270b5d64309d       ba5dc23f65d4c       2 seconds ago            Exited              busybox                                  0                   ab6822d120e08       test-local-path
	7531688405601       dd1b12fcb6097       5 seconds ago            Running             hello-world-app                          0                   c700742d85bc8       hello-world-app-86c47465fc-6rj25
	359a07e313754       98f6c3b32d565       15 seconds ago           Exited              helm-test                                0                   0295ef18e13f3       helm-test
	55d15aac481c0       f4215f6ee683f       19 seconds ago           Running             nginx                                    0                   07cb1c2c57a81       nginx
	4af8a03db4771       beae173ccac6a       23 seconds ago           Exited              registry-test                            0                   cc17540a0231d       registry-test
	fcd6e4b9916bc       db2fc13d44d50       44 seconds ago           Running             gcp-auth                                 0                   bb9b86e0d480a       gcp-auth-5db96cd9b4-ggvwr
	738b8ef163256       738351fd438f0       About a minute ago       Running             csi-snapshotter                          0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	01f7383e9e85c       931dbfd16f87c       About a minute ago       Running             csi-provisioner                          0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	9d6476d41e040       e899260153aed       About a minute ago       Running             liveness-probe                           0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	9bc1014237901       e255e073c508c       About a minute ago       Running             hostpath                                 0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	df5491d0f0e69       88ef14a257f42       About a minute ago       Running             node-driver-registrar                    0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	51a1975846556       19a639eda60f0       2 minutes ago            Running             csi-resizer                              0                   ffd96b39d9686       csi-hostpath-resizer-0
	25c7e3c1ecc8b       59cbb42146a37       2 minutes ago            Running             csi-attacher                             0                   fd542904144d7       csi-hostpath-attacher-0
	916094ed04d48       a1ed5895ba635       2 minutes ago            Running             csi-external-health-monitor-controller   0                   2f7335f70dea3       csi-hostpathplugin-pnc6v
	ed9ef003e0df6       b29d748098e32       2 minutes ago            Exited              patch                                    1                   d73e3cbacbf17       ingress-nginx-admission-patch-q9kt9
	fde87e38c5f6e       b29d748098e32       2 minutes ago            Exited              create                                   0                   bc8c502d4fd07       ingress-nginx-admission-create-hswjz
	d042bb3ed5d59       aa61ee9c70bc4       2 minutes ago            Running             volume-snapshot-controller               0                   4b85ad03852ff       snapshot-controller-745499f584-xdtb9
	8eccbd30da97c       aa61ee9c70bc4       2 minutes ago            Running             volume-snapshot-controller               0                   4ea9e74c1885b       snapshot-controller-745499f584-tdssq
	f79cc2e359101       e16d1e3a10667       2 minutes ago            Running             local-path-provisioner                   0                   0665f14344da0       local-path-provisioner-8d985888d-grtkh
	de948adf488dc       31de47c733c91       2 minutes ago            Running             yakd                                     0                   c5dae140a0922       yakd-dashboard-5ddbf7d777-5mxzv
	c2b261372a60e       1a9bd6f561b5c       2 minutes ago            Running             cloud-spanner-emulator                   0                   11b991c28a75d       cloud-spanner-emulator-8677549d7-gsd8s
	28a244a503aac       fa3ba2723b886       3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   c2d2374b84f45       nvidia-device-plugin-daemonset-hdg5d
	30ab8201e58a0       6e38f40d628db       3 minutes ago            Running             storage-provisioner                      0                   f8459198a4b18       storage-provisioner
	eae36aadcfb62       cbb01a7bd410d       3 minutes ago            Running             coredns                                  0                   fd4f365f1ff56       coredns-7db6d8ff4d-jlh2m
	c795e62d75734       a0bf559e280cf       3 minutes ago            Running             kube-proxy                               0                   bc63ccc450f04       kube-proxy-njdqg
	1ef0606adf004       c7aad43836fa5       3 minutes ago            Running             kube-controller-manager                  0                   c7e86b98f8c6a       kube-controller-manager-addons-051772
	eae471e0af9b1       259c8277fcbbc       3 minutes ago            Running             kube-scheduler                           0                   ed80dcd5f43aa       kube-scheduler-addons-051772
	722d2648688c4       3861cfcd7c04c       3 minutes ago            Running             etcd                                     0                   c6f9dfd342075       etcd-addons-051772
	a1a4d51c58521       c42f13656d0b2       3 minutes ago            Running             kube-apiserver                           0                   9328480a6ece8       kube-apiserver-addons-051772
	
	
	==> containerd <==
	Apr 29 12:28:45 addons-051772 containerd[653]: time="2024-04-29T12:28:45.538690359Z" level=info msg="shim disconnected" id=7270b5d64309dbe3d1ac996783903bf7669bfb3ee67f3a31500d4c5963cdbf34 namespace=k8s.io
	Apr 29 12:28:45 addons-051772 containerd[653]: time="2024-04-29T12:28:45.538876374Z" level=warning msg="cleaning up after shim disconnected" id=7270b5d64309dbe3d1ac996783903bf7669bfb3ee67f3a31500d4c5963cdbf34 namespace=k8s.io
	Apr 29 12:28:45 addons-051772 containerd[653]: time="2024-04-29T12:28:45.539009755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 12:28:46 addons-051772 containerd[653]: time="2024-04-29T12:28:46.875567455Z" level=info msg="StopPodSandbox for \"ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e\""
	Apr 29 12:28:46 addons-051772 containerd[653]: time="2024-04-29T12:28:46.875664851Z" level=info msg="Container to stop \"7270b5d64309dbe3d1ac996783903bf7669bfb3ee67f3a31500d4c5963cdbf34\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 29 12:28:46 addons-051772 containerd[653]: time="2024-04-29T12:28:46.924619229Z" level=info msg="shim disconnected" id=ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e namespace=k8s.io
	Apr 29 12:28:46 addons-051772 containerd[653]: time="2024-04-29T12:28:46.924695495Z" level=warning msg="cleaning up after shim disconnected" id=ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e namespace=k8s.io
	Apr 29 12:28:46 addons-051772 containerd[653]: time="2024-04-29T12:28:46.924707737Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 12:28:47 addons-051772 containerd[653]: time="2024-04-29T12:28:47.021738636Z" level=info msg="TearDown network for sandbox \"ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e\" successfully"
	Apr 29 12:28:47 addons-051772 containerd[653]: time="2024-04-29T12:28:47.021826482Z" level=info msg="StopPodSandbox for \"ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e\" returns successfully"
	Apr 29 12:28:47 addons-051772 containerd[653]: time="2024-04-29T12:28:47.828983960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94,Uid:c088e024-da99-4099-be32-33c2e95ff5bc,Namespace:local-path-storage,Attempt:0,}"
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.064801565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.066392502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.075554990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.076589096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.196538004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94,Uid:c088e024-da99-4099-be32-33c2e95ff5bc,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"5c8b5e9474b6124c3dd3e7e5340b2f63cd9817608e119b42d47df1d10bf9c70d\""
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.205367548Z" level=info msg="CreateContainer within sandbox \"5c8b5e9474b6124c3dd3e7e5340b2f63cd9817608e119b42d47df1d10bf9c70d\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.230209105Z" level=info msg="CreateContainer within sandbox \"5c8b5e9474b6124c3dd3e7e5340b2f63cd9817608e119b42d47df1d10bf9c70d\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"10fcc90bcde52b1b0d47af516ce81c067baafae0a20faac3b71aa917f522f399\""
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.233101176Z" level=info msg="StartContainer for \"10fcc90bcde52b1b0d47af516ce81c067baafae0a20faac3b71aa917f522f399\""
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.354885702Z" level=info msg="StartContainer for \"10fcc90bcde52b1b0d47af516ce81c067baafae0a20faac3b71aa917f522f399\" returns successfully"
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.414905481Z" level=info msg="shim disconnected" id=10fcc90bcde52b1b0d47af516ce81c067baafae0a20faac3b71aa917f522f399 namespace=k8s.io
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.415360839Z" level=warning msg="cleaning up after shim disconnected" id=10fcc90bcde52b1b0d47af516ce81c067baafae0a20faac3b71aa917f522f399 namespace=k8s.io
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.415766564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.472957436Z" level=info msg="StopContainer for \"f79cc2e359101a0acbce94070170f871ad228dd6b0163c13883412ebdfddeaeb\" with timeout 30 (s)"
	Apr 29 12:28:48 addons-051772 containerd[653]: time="2024-04-29T12:28:48.475072581Z" level=info msg="Stop container \"f79cc2e359101a0acbce94070170f871ad228dd6b0163c13883412ebdfddeaeb\" with signal terminated"
	
	
	==> coredns [eae36aadcfb62d163f59ab10f6f9d58c81505d22d0232dd8d8f7232741f7b094] <==
	[INFO] 10.244.0.21:44188 - 27179 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000167331s
	[INFO] 10.244.0.21:44188 - 4481 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123481s
	[INFO] 10.244.0.21:44188 - 30917 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000286788s
	[INFO] 10.244.0.21:44188 - 47326 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000124648s
	[INFO] 10.244.0.21:38413 - 14754 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118965s
	[INFO] 10.244.0.21:38413 - 21354 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078853s
	[INFO] 10.244.0.21:38413 - 24233 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044519s
	[INFO] 10.244.0.21:38413 - 24583 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043574s
	[INFO] 10.244.0.21:38413 - 34690 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061982s
	[INFO] 10.244.0.21:38413 - 55782 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072196s
	[INFO] 10.244.0.21:38413 - 24018 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000245958s
	[INFO] 10.244.0.21:46535 - 55566 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096469s
	[INFO] 10.244.0.21:46535 - 5089 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077656s
	[INFO] 10.244.0.21:53180 - 40470 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043693s
	[INFO] 10.244.0.21:53180 - 25250 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067397s
	[INFO] 10.244.0.21:46535 - 14736 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000290126s
	[INFO] 10.244.0.21:53180 - 37018 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060654s
	[INFO] 10.244.0.21:46535 - 6849 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052784s
	[INFO] 10.244.0.21:53180 - 50156 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000285135s
	[INFO] 10.244.0.21:53180 - 18566 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037759s
	[INFO] 10.244.0.21:53180 - 64294 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039871s
	[INFO] 10.244.0.21:46535 - 63655 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038794s
	[INFO] 10.244.0.21:53180 - 7075 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069856s
	[INFO] 10.244.0.21:46535 - 59193 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101497s
	[INFO] 10.244.0.21:46535 - 37899 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071368s
	
	
	==> describe nodes <==
	Name:               addons-051772
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051772
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=addons-051772
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_25_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051772
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051772"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:25:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051772
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:28:19 +0000   Mon, 29 Apr 2024 12:25:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:28:19 +0000   Mon, 29 Apr 2024 12:25:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:28:19 +0000   Mon, 29 Apr 2024 12:25:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:28:19 +0000   Mon, 29 Apr 2024 12:25:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    addons-051772
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb2b953fb6624a76895eaab8dcc1b643
	  System UUID:                cb2b953f-b662-4a76-895e-aab8dcc1b643
	  Boot ID:                    6c1bb4a9-e7fe-4c8b-a3e1-17d8120aa28d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-8677549d7-gsd8s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-86c47465fc-6rj25                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  gcp-auth                    gcp-auth-5db96cd9b4-ggvwr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 coredns-7db6d8ff4d-jlh2m                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m20s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 csi-hostpathplugin-pnc6v                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 etcd-addons-051772                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m34s
	  kube-system                 kube-apiserver-addons-051772                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-addons-051772                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-proxy-njdqg                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-scheduler-addons-051772                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 nvidia-device-plugin-daemonset-hdg5d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 snapshot-controller-745499f584-tdssq                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 snapshot-controller-745499f584-xdtb9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  local-path-storage          helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-8d985888d-grtkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-5mxzv                               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m19s  kube-proxy       
	  Normal  Starting                 3m34s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m34s  kubelet          Node addons-051772 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s  kubelet          Node addons-051772 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s  kubelet          Node addons-051772 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m33s  kubelet          Node addons-051772 status is now: NodeReady
	  Normal  RegisteredNode           3m21s  node-controller  Node addons-051772 event: Registered Node addons-051772 in Controller
	
	
	==> dmesg <==
	[  +5.011388] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.059769] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.775313] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.728751] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[ +13.901804] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.116414] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.450714] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.076259] kauditd_printk_skb: 150 callbacks suppressed
	[  +7.583367] kauditd_printk_skb: 64 callbacks suppressed
	[Apr29 12:26] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.021967] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.806594] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.530197] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.011562] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.791490] kauditd_printk_skb: 44 callbacks suppressed
	[Apr29 12:27] kauditd_printk_skb: 24 callbacks suppressed
	[ +25.906907] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.032553] kauditd_printk_skb: 11 callbacks suppressed
	[Apr29 12:28] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.766721] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.962582] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.314517] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.095298] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.270810] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.004162] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [722d2648688c4dfb1407b1687fdf4a333307a63919707bc854b33e9717bbd1e8] <==
	{"level":"info","ts":"2024-04-29T12:26:42.351012Z","caller":"traceutil/trace.go:171","msg":"trace[1225991486] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-q9kt9; range_end:; response_count:1; response_revision:1028; }","duration":"187.347673ms","start":"2024-04-29T12:26:42.163656Z","end":"2024-04-29T12:26:42.351004Z","steps":["trace[1225991486] 'agreement among raft nodes before linearized reading'  (duration: 187.228521ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:26:42.351272Z","caller":"traceutil/trace.go:171","msg":"trace[1785384892] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"220.489709ms","start":"2024-04-29T12:26:42.130771Z","end":"2024-04-29T12:26:42.351261Z","steps":["trace[1785384892] 'process raft request'  (duration: 210.423659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:26:45.367594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.393043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202420019180541515 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-hswjz\" mod_revision:1051 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-hswjz\" value_size:4187 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-hswjz\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T12:26:45.367793Z","caller":"traceutil/trace.go:171","msg":"trace[530039096] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"142.949785ms","start":"2024-04-29T12:26:45.224829Z","end":"2024-04-29T12:26:45.367779Z","steps":["trace[530039096] 'process raft request'  (duration: 11.961489ms)","trace[530039096] 'compare'  (duration: 129.902652ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T12:26:45.372815Z","caller":"traceutil/trace.go:171","msg":"trace[925850070] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"147.590931ms","start":"2024-04-29T12:26:45.225046Z","end":"2024-04-29T12:26:45.372637Z","steps":["trace[925850070] 'process raft request'  (duration: 142.668386ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:26:46.916416Z","caller":"traceutil/trace.go:171","msg":"trace[32630600] linearizableReadLoop","detail":"{readStateIndex:1111; appliedIndex:1110; }","duration":"106.520532ms","start":"2024-04-29T12:26:46.80935Z","end":"2024-04-29T12:26:46.915871Z","steps":["trace[32630600] 'read index received'  (duration: 106.367248ms)","trace[32630600] 'applied index is now lower than readState.Index'  (duration: 152.858µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T12:26:46.917324Z","caller":"traceutil/trace.go:171","msg":"trace[110249673] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"116.912506ms","start":"2024-04-29T12:26:46.800316Z","end":"2024-04-29T12:26:46.917228Z","steps":["trace[110249673] 'process raft request'  (duration: 115.457498ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:26:46.918127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.764799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" ","response":"range_response_count:1 size:1742"}
	{"level":"info","ts":"2024-04-29T12:26:46.918186Z","caller":"traceutil/trace.go:171","msg":"trace[1181621634] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1078; }","duration":"108.831566ms","start":"2024-04-29T12:26:46.809344Z","end":"2024-04-29T12:26:46.918176Z","steps":["trace[1181621634] 'agreement among raft nodes before linearized reading'  (duration: 108.698404ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:27:06.676373Z","caller":"traceutil/trace.go:171","msg":"trace[280218010] linearizableReadLoop","detail":"{readStateIndex:1200; appliedIndex:1199; }","duration":"124.185473ms","start":"2024-04-29T12:27:06.552165Z","end":"2024-04-29T12:27:06.67635Z","steps":["trace[280218010] 'read index received'  (duration: 124.033517ms)","trace[280218010] 'applied index is now lower than readState.Index'  (duration: 151.589µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T12:27:06.676944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.752383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14386"}
	{"level":"info","ts":"2024-04-29T12:27:06.677725Z","caller":"traceutil/trace.go:171","msg":"trace[1723609689] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1163; }","duration":"125.401646ms","start":"2024-04-29T12:27:06.552161Z","end":"2024-04-29T12:27:06.677563Z","steps":["trace[1723609689] 'agreement among raft nodes before linearized reading'  (duration: 124.47332ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:27:06.677174Z","caller":"traceutil/trace.go:171","msg":"trace[754465344] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"200.942971ms","start":"2024-04-29T12:27:06.476077Z","end":"2024-04-29T12:27:06.67702Z","steps":["trace[754465344] 'process raft request'  (duration: 200.172786ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:28:12.0163Z","caller":"traceutil/trace.go:171","msg":"trace[666590967] linearizableReadLoop","detail":"{readStateIndex:1372; appliedIndex:1371; }","duration":"225.828847ms","start":"2024-04-29T12:28:11.790429Z","end":"2024-04-29T12:28:12.016258Z","steps":["trace[666590967] 'read index received'  (duration: 196.48267ms)","trace[666590967] 'applied index is now lower than readState.Index'  (duration: 29.34528ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T12:28:12.019633Z","caller":"traceutil/trace.go:171","msg":"trace[1892640728] transaction","detail":"{read_only:false; response_revision:1319; number_of_response:1; }","duration":"226.665237ms","start":"2024-04-29T12:28:11.792946Z","end":"2024-04-29T12:28:12.019611Z","steps":["trace[1892640728] 'process raft request'  (duration: 193.9565ms)","trace[1892640728] 'compare'  (duration: 29.047711ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T12:28:12.019947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.49462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-52v7v\" ","response":"range_response_count:1 size:4291"}
	{"level":"info","ts":"2024-04-29T12:28:12.020003Z","caller":"traceutil/trace.go:171","msg":"trace[1233045194] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-52v7v; range_end:; response_count:1; response_revision:1319; }","duration":"250.440262ms","start":"2024-04-29T12:28:11.769547Z","end":"2024-04-29T12:28:12.019987Z","steps":["trace[1233045194] 'agreement among raft nodes before linearized reading'  (duration: 250.288437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:28:12.020967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.649374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/metrics-server-auth-reader\" ","response":"range_response_count:1 size:1243"}
	{"level":"info","ts":"2024-04-29T12:28:12.021004Z","caller":"traceutil/trace.go:171","msg":"trace[476817725] range","detail":"{range_begin:/registry/rolebindings/kube-system/metrics-server-auth-reader; range_end:; response_count:1; response_revision:1323; }","duration":"227.704253ms","start":"2024-04-29T12:28:11.793289Z","end":"2024-04-29T12:28:12.020993Z","steps":["trace[476817725] 'agreement among raft nodes before linearized reading'  (duration: 227.546523ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:28:12.021186Z","caller":"traceutil/trace.go:171","msg":"trace[235376680] transaction","detail":"{read_only:false; response_revision:1320; number_of_response:1; }","duration":"227.824785ms","start":"2024-04-29T12:28:11.79335Z","end":"2024-04-29T12:28:12.021175Z","steps":["trace[235376680] 'process raft request'  (duration: 227.293305ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:28:12.021383Z","caller":"traceutil/trace.go:171","msg":"trace[1163833718] transaction","detail":"{read_only:false; response_revision:1321; number_of_response:1; }","duration":"206.135888ms","start":"2024-04-29T12:28:11.815235Z","end":"2024-04-29T12:28:12.021371Z","steps":["trace[1163833718] 'process raft request'  (duration: 205.471326ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:28:12.021689Z","caller":"traceutil/trace.go:171","msg":"trace[197844913] transaction","detail":"{read_only:false; response_revision:1322; number_of_response:1; }","duration":"205.786071ms","start":"2024-04-29T12:28:11.815893Z","end":"2024-04-29T12:28:12.021679Z","steps":["trace[197844913] 'process raft request'  (duration: 204.850016ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T12:28:12.021788Z","caller":"traceutil/trace.go:171","msg":"trace[240408290] transaction","detail":"{read_only:false; response_revision:1323; number_of_response:1; }","duration":"201.417758ms","start":"2024-04-29T12:28:11.820363Z","end":"2024-04-29T12:28:12.021781Z","steps":["trace[240408290] 'process raft request'  (duration: 200.42892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T12:28:12.022381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.142887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T12:28:12.02241Z","caller":"traceutil/trace.go:171","msg":"trace[494024205] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1323; }","duration":"175.192345ms","start":"2024-04-29T12:28:11.847208Z","end":"2024-04-29T12:28:12.0224Z","steps":["trace[494024205] 'agreement among raft nodes before linearized reading'  (duration: 175.143372ms)"],"step_count":1}
	
	
	==> gcp-auth [fcd6e4b9916bc94f950a58584495daa63b010a693590e9385e9267974e9444cb] <==
	2024/04/29 12:28:03 GCP Auth Webhook started!
	2024/04/29 12:28:14 Ready to marshal response ...
	2024/04/29 12:28:14 Ready to write response ...
	2024/04/29 12:28:17 Ready to marshal response ...
	2024/04/29 12:28:17 Ready to write response ...
	2024/04/29 12:28:17 Ready to marshal response ...
	2024/04/29 12:28:17 Ready to write response ...
	2024/04/29 12:28:18 Ready to marshal response ...
	2024/04/29 12:28:18 Ready to write response ...
	2024/04/29 12:28:27 Ready to marshal response ...
	2024/04/29 12:28:27 Ready to write response ...
	2024/04/29 12:28:28 Ready to marshal response ...
	2024/04/29 12:28:28 Ready to write response ...
	2024/04/29 12:28:36 Ready to marshal response ...
	2024/04/29 12:28:36 Ready to write response ...
	2024/04/29 12:28:47 Ready to marshal response ...
	2024/04/29 12:28:47 Ready to write response ...
	
	
	==> kernel <==
	 12:28:48 up 4 min,  0 users,  load average: 1.05, 0.90, 0.41
	Linux addons-051772 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a1a4d51c5852161f673710f2453096c86d8fb5a76dde5555f881e513ea6fd0a7] <==
	I0429 12:25:36.006343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 12:25:36.006439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 12:25:36.546298       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.111.109.122"}
	I0429 12:25:36.611042       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.99.174.98"}
	I0429 12:25:36.660342       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0429 12:25:38.112112       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.98.86.191"}
	I0429 12:25:38.141938       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0429 12:25:38.305757       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.101.186"}
	I0429 12:25:39.715714       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.9.10"}
	W0429 12:26:10.988385       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 12:26:10.988536       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0429 12:26:10.989439       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.192.212:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.192.212:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.192.212:443: connect: connection refused
	I0429 12:26:11.026409       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 12:28:11.636582       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0429 12:28:12.013766       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W0429 12:28:12.691077       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 12:28:17.179882       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 12:28:17.350614       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.107.32"}
	I0429 12:28:29.476704       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 12:28:33.214344       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.38:8443->10.244.0.26:45190: read: connection reset by peer
	I0429 12:28:36.881391       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.195.67"}
	E0429 12:28:48.540525       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 12:28:48.549309       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 12:28:48.567387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [1ef0606adf004391356226e83717617c75aedda3f68013ba5f4caa14697e4f6a] <==
	E0429 12:28:14.008830       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 12:28:15.677218       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 12:28:15.677636       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 12:28:20.493145       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 12:28:20.493192       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 12:28:21.769308       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0429 12:28:27.405813       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0429 12:28:27.406040       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 12:28:27.492141       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="5.774µs"
	I0429 12:28:27.829406       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0429 12:28:27.829507       1 shared_informer.go:320] Caches are synced for garbage collector
	W0429 12:28:30.560358       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 12:28:30.560927       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 12:28:35.483943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="5.705µs"
	I0429 12:28:36.735155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.257944ms"
	I0429 12:28:36.751251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="16.024544ms"
	I0429 12:28:36.751649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="324.481µs"
	I0429 12:28:36.756013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="23.049µs"
	I0429 12:28:38.875735       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0429 12:28:38.896893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="5.807µs"
	I0429 12:28:38.902018       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 12:28:42.885362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.924639ms"
	I0429 12:28:42.886125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="97.855µs"
	I0429 12:28:48.449513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="7.27µs"
	I0429 12:28:48.912323       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [c795e62d7573465526e71082b18539760b60e2927b535662bfa0e29f57395de6] <==
	I0429 12:25:29.069366       1 server_linux.go:69] "Using iptables proxy"
	I0429 12:25:29.084793       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.38"]
	I0429 12:25:29.194314       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 12:25:29.194354       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 12:25:29.194370       1 server_linux.go:165] "Using iptables Proxier"
	I0429 12:25:29.198400       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 12:25:29.198697       1 server.go:872] "Version info" version="v1.30.0"
	I0429 12:25:29.198713       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 12:25:29.199642       1 config.go:192] "Starting service config controller"
	I0429 12:25:29.199651       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 12:25:29.199678       1 config.go:101] "Starting endpoint slice config controller"
	I0429 12:25:29.199682       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 12:25:29.202376       1 config.go:319] "Starting node config controller"
	I0429 12:25:29.202383       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 12:25:29.301609       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 12:25:29.301684       1 shared_informer.go:320] Caches are synced for service config
	I0429 12:25:29.311124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eae471e0af9b1c7b9d7c65838c2dd2bc130a87b8eb07c9525c85b339eb37a1be] <==
	W0429 12:25:11.926238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 12:25:11.926403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 12:25:12.742314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:25:12.742373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 12:25:12.760604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 12:25:12.760663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 12:25:12.779663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:25:12.779719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 12:25:12.848846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:25:12.848952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 12:25:12.912994       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 12:25:12.913090       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:25:12.916739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 12:25:12.916793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 12:25:12.962982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 12:25:12.963035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 12:25:12.982543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 12:25:12.982603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 12:25:13.009698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:25:13.009738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 12:25:13.140527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:25:13.140590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 12:25:13.147913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:25:13.147966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 12:25:16.113323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 12:28:42 addons-051772 kubelet[1246]: I0429 12:28:42.852789    1246 scope.go:117] "RemoveContainer" containerID="9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88"
	Apr 29 12:28:42 addons-051772 kubelet[1246]: I0429 12:28:42.866701    1246 scope.go:117] "RemoveContainer" containerID="9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88"
	Apr 29 12:28:42 addons-051772 kubelet[1246]: E0429 12:28:42.867718    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88\": not found" containerID="9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88"
	Apr 29 12:28:42 addons-051772 kubelet[1246]: I0429 12:28:42.867762    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88"} err="failed to get container status \"9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f94d2f1aa51bf1f7f7e8fd4e48cdddb9063243546111d8b07c8e41cc886fd88\": not found"
	Apr 29 12:28:45 addons-051772 kubelet[1246]: I0429 12:28:45.885611    1246 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-6rj25" podStartSLOduration=4.68789421 podStartE2EDuration="9.885595283s" podCreationTimestamp="2024-04-29 12:28:36 +0000 UTC" firstStartedPulling="2024-04-29 12:28:37.358638218 +0000 UTC m=+203.028731694" lastFinishedPulling="2024-04-29 12:28:42.556339288 +0000 UTC m=+208.226432767" observedRunningTime="2024-04-29 12:28:42.874709125 +0000 UTC m=+208.544802622" watchObservedRunningTime="2024-04-29 12:28:45.885595283 +0000 UTC m=+211.555688779"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.172375    1246 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\") pod \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\" (UID: \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\") "
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.172439    1246 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdzz5\" (UniqueName: \"kubernetes.io/projected/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-kube-api-access-jdzz5\") pod \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\" (UID: \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\") "
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.172576    1246 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-gcp-creds\") pod \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\" (UID: \"03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6\") "
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.173357    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" (UID: "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.173547    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94" (OuterVolumeSpecName: "data") pod "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" (UID: "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6"). InnerVolumeSpecName "pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.176762    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-kube-api-access-jdzz5" (OuterVolumeSpecName: "kube-api-access-jdzz5") pod "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" (UID: "03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6"). InnerVolumeSpecName "kube-api-access-jdzz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.273330    1246 reconciler_common.go:289] "Volume detached for volume \"pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\" (UniqueName: \"kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\") on node \"addons-051772\" DevicePath \"\""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.273368    1246 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jdzz5\" (UniqueName: \"kubernetes.io/projected/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-kube-api-access-jdzz5\") on node \"addons-051772\" DevicePath \"\""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.273384    1246 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6-gcp-creds\") on node \"addons-051772\" DevicePath \"\""
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.525008    1246 topology_manager.go:215] "Topology Admit Handler" podUID="c088e024-da99-4099-be32-33c2e95ff5bc" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: E0429 12:28:47.525109    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e8531a22-4176-4274-8a3d-56c252f22ada" containerName="controller"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: E0429 12:28:47.525124    1246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" containerName="busybox"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.525170    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8531a22-4176-4274-8a3d-56c252f22ada" containerName="controller"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.525177    1246 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" containerName="busybox"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.676687    1246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c088e024-da99-4099-be32-33c2e95ff5bc-data\") pod \"helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\" (UID: \"c088e024-da99-4099-be32-33c2e95ff5bc\") " pod="local-path-storage/helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.676723    1246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c088e024-da99-4099-be32-33c2e95ff5bc-script\") pod \"helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\" (UID: \"c088e024-da99-4099-be32-33c2e95ff5bc\") " pod="local-path-storage/helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.676743    1246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c088e024-da99-4099-be32-33c2e95ff5bc-gcp-creds\") pod \"helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\" (UID: \"c088e024-da99-4099-be32-33c2e95ff5bc\") " pod="local-path-storage/helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.676769    1246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbj7\" (UniqueName: \"kubernetes.io/projected/c088e024-da99-4099-be32-33c2e95ff5bc-kube-api-access-gbbj7\") pod \"helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94\" (UID: \"c088e024-da99-4099-be32-33c2e95ff5bc\") " pod="local-path-storage/helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94"
	Apr 29 12:28:47 addons-051772 kubelet[1246]: I0429 12:28:47.885321    1246 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab6822d120e0808af99c5174c42d49fd41cedf0946160bf6ef0345e689d0827e"
	Apr 29 12:28:48 addons-051772 kubelet[1246]: I0429 12:28:48.477767    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6" path="/var/lib/kubelet/pods/03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6/volumes"
	
	
	==> storage-provisioner [30ab8201e58a06e0917a2a251979d1b3f2c028e7d1f218af4c76921db4ce174c] <==
	I0429 12:25:36.431606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 12:25:36.597318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 12:25:36.597366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 12:25:36.744534       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 12:25:36.744780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"174605dc-3b30-47a6-a515-12870e5cec37", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-051772_b3c7fcda-22ef-456f-9da5-e9b7cd47444d became leader
	I0429 12:25:36.746027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-051772_b3c7fcda-22ef-456f-9da5-e9b7cd47444d!
	I0429 12:25:37.050724       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-051772_b3c7fcda-22ef-456f-9da5-e9b7cd47444d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051772 -n addons-051772
helpers_test.go:261: (dbg) Run:  kubectl --context addons-051772 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-051772 describe pod helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-051772 describe pod helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94: exit status 1 (61.423821ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-051772 describe pod helper-pod-delete-pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94: exit status 1
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (9.02s)


Test pass (288/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 47.48
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 18.1
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 93.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 217.79
29 TestAddons/parallel/Registry 22.64
30 TestAddons/parallel/Ingress 29.05
31 TestAddons/parallel/InspektorGadget 11.86
32 TestAddons/parallel/MetricsServer 7.18
33 TestAddons/parallel/HelmTiller 23.43
35 TestAddons/parallel/CSI 69.86
36 TestAddons/parallel/Headlamp 14.97
37 TestAddons/parallel/CloudSpanner 6.58
38 TestAddons/parallel/LocalPath 63.51
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.11
44 TestAddons/StoppedEnableDisable 92.74
45 TestCertOptions 95.19
46 TestCertExpiration 260.5
48 TestForceSystemdFlag 82.78
49 TestForceSystemdEnv 47.62
51 TestKVMDriverInstallOrUpdate 8.46
55 TestErrorSpam/setup 45.86
56 TestErrorSpam/start 0.38
57 TestErrorSpam/status 0.81
58 TestErrorSpam/pause 1.65
59 TestErrorSpam/unpause 1.7
60 TestErrorSpam/stop 4.62
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 98.38
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 45
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
72 TestFunctional/serial/CacheCmd/cache/add_local 3.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 40.87
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.62
83 TestFunctional/serial/LogsFileCmd 1.51
84 TestFunctional/serial/InvalidService 4.33
86 TestFunctional/parallel/ConfigCmd 0.38
87 TestFunctional/parallel/DashboardCmd 15.47
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.87
94 TestFunctional/parallel/ServiceCmdConnect 13.84
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 49.36
98 TestFunctional/parallel/SSHCmd 0.4
99 TestFunctional/parallel/CpCmd 1.33
100 TestFunctional/parallel/MySQL 27.46
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.32
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
110 TestFunctional/parallel/License 0.8
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
124 TestFunctional/parallel/ImageCommands/ImageBuild 5.18
125 TestFunctional/parallel/ImageCommands/Setup 3.26
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.72
131 TestFunctional/parallel/MountCmd/any-port 20.7
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.93
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.15
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.56
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.74
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.56
139 TestFunctional/parallel/MountCmd/specific-port 2.13
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
141 TestFunctional/parallel/ServiceCmd/DeployApp 11.43
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
145 TestFunctional/parallel/ServiceCmd/List 1.24
146 TestFunctional/parallel/ServiceCmd/JSONOutput 1.37
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
148 TestFunctional/parallel/ServiceCmd/Format 0.31
149 TestFunctional/parallel/ServiceCmd/URL 0.3
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 221
157 TestMultiControlPlane/serial/DeployApp 8.03
158 TestMultiControlPlane/serial/PingHostFromPods 1.42
159 TestMultiControlPlane/serial/AddWorkerNode 50.98
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
162 TestMultiControlPlane/serial/CopyFile 13.8
163 TestMultiControlPlane/serial/StopSecondaryNode 93.16
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.41
165 TestMultiControlPlane/serial/RestartSecondaryNode 45.44
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.58
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 450.51
168 TestMultiControlPlane/serial/DeleteSecondaryNode 8.22
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
170 TestMultiControlPlane/serial/StopCluster 276.42
171 TestMultiControlPlane/serial/RestartCluster 162.7
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
173 TestMultiControlPlane/serial/AddSecondaryNode 69.59
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
178 TestJSONOutput/start/Command 103.69
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.78
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.69
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.22
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 101.02
210 TestMountStart/serial/StartWithMountFirst 29.41
211 TestMountStart/serial/VerifyMountFirst 0.4
212 TestMountStart/serial/StartWithMountSecond 29.86
213 TestMountStart/serial/VerifyMountSecond 0.4
214 TestMountStart/serial/DeleteFirst 0.65
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 2.29
217 TestMountStart/serial/RestartStopped 25.02
218 TestMountStart/serial/VerifyMountPostStop 0.41
221 TestMultiNode/serial/FreshStart2Nodes 107.08
222 TestMultiNode/serial/DeployApp2Nodes 6.01
223 TestMultiNode/serial/PingHostFrom2Pods 0.88
224 TestMultiNode/serial/AddNode 42.62
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.63
228 TestMultiNode/serial/StopNode 2.36
229 TestMultiNode/serial/StartAfterStop 27.26
230 TestMultiNode/serial/RestartKeepsNodes 295.78
231 TestMultiNode/serial/DeleteNode 2.42
232 TestMultiNode/serial/StopMultiNode 184.11
233 TestMultiNode/serial/RestartMultiNode 79.16
234 TestMultiNode/serial/ValidateNameConflict 46.4
239 TestPreload 398.83
241 TestScheduledStopUnix 116.96
245 TestRunningBinaryUpgrade 191.32
247 TestKubernetesUpgrade 245.61
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/StartWithK8s 102.14
259 TestNetworkPlugins/group/false 3.37
263 TestStoppedBinaryUpgrade/Setup 3.72
264 TestStoppedBinaryUpgrade/Upgrade 216.96
265 TestNoKubernetes/serial/StartWithStopK8s 67.48
266 TestNoKubernetes/serial/Start 59.13
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
268 TestNoKubernetes/serial/ProfileList 1.69
269 TestNoKubernetes/serial/Stop 2.07
270 TestNoKubernetes/serial/StartNoArgs 26.74
279 TestPause/serial/Start 64.99
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
281 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
282 TestPause/serial/SecondStartNoReconfiguration 75.62
283 TestNetworkPlugins/group/auto/Start 67.47
284 TestNetworkPlugins/group/kindnet/Start 70.97
285 TestPause/serial/Pause 0.88
286 TestPause/serial/VerifyStatus 0.31
287 TestPause/serial/Unpause 0.68
288 TestPause/serial/PauseAgain 0.89
289 TestPause/serial/DeletePaused 1.06
290 TestPause/serial/VerifyDeletedResources 1.13
291 TestNetworkPlugins/group/calico/Start 106.9
292 TestNetworkPlugins/group/auto/KubeletFlags 0.24
293 TestNetworkPlugins/group/auto/NetCatPod 9.28
294 TestNetworkPlugins/group/auto/DNS 0.17
295 TestNetworkPlugins/group/auto/Localhost 0.15
296 TestNetworkPlugins/group/auto/HairPin 0.13
297 TestNetworkPlugins/group/custom-flannel/Start 87.97
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
300 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
301 TestNetworkPlugins/group/kindnet/DNS 0.16
302 TestNetworkPlugins/group/kindnet/Localhost 0.15
303 TestNetworkPlugins/group/kindnet/HairPin 0.14
304 TestNetworkPlugins/group/enable-default-cni/Start 106.8
305 TestNetworkPlugins/group/calico/ControllerPod 6.01
306 TestNetworkPlugins/group/calico/KubeletFlags 0.24
307 TestNetworkPlugins/group/calico/NetCatPod 13.29
308 TestNetworkPlugins/group/calico/DNS 0.24
309 TestNetworkPlugins/group/calico/Localhost 0.14
310 TestNetworkPlugins/group/calico/HairPin 0.14
311 TestNetworkPlugins/group/flannel/Start 86.04
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
314 TestNetworkPlugins/group/custom-flannel/DNS 0.18
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
317 TestNetworkPlugins/group/bridge/Start 78.65
319 TestStartStop/group/old-k8s-version/serial/FirstStart 175.02
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.02
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
325 TestNetworkPlugins/group/flannel/ControllerPod 5.08
327 TestStartStop/group/no-preload/serial/FirstStart 117.98
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
329 TestNetworkPlugins/group/flannel/NetCatPod 11.18
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
331 TestNetworkPlugins/group/bridge/NetCatPod 10.3
332 TestNetworkPlugins/group/flannel/DNS 0.17
333 TestNetworkPlugins/group/flannel/Localhost 0.15
334 TestNetworkPlugins/group/flannel/HairPin 0.18
335 TestNetworkPlugins/group/bridge/DNS 0.2
336 TestNetworkPlugins/group/bridge/Localhost 0.17
337 TestNetworkPlugins/group/bridge/HairPin 0.16
339 TestStartStop/group/embed-certs/serial/FirstStart 66.7
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 128.85
342 TestStartStop/group/embed-certs/serial/DeployApp 10.34
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
344 TestStartStop/group/embed-certs/serial/Stop 92.46
345 TestStartStop/group/no-preload/serial/DeployApp 10.33
346 TestStartStop/group/old-k8s-version/serial/DeployApp 11.52
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
348 TestStartStop/group/no-preload/serial/Stop 92.54
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
350 TestStartStop/group/old-k8s-version/serial/Stop 92.51
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.3
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.52
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
355 TestStartStop/group/embed-certs/serial/SecondStart 297.24
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/no-preload/serial/SecondStart 298.86
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
359 TestStartStop/group/old-k8s-version/serial/SecondStart 550.6
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.99
362 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
365 TestStartStop/group/embed-certs/serial/Pause 2.93
367 TestStartStop/group/newest-cni/serial/FirstStart 59.21
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
371 TestStartStop/group/no-preload/serial/Pause 2.86
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.39
375 TestStartStop/group/newest-cni/serial/Stop 2.34
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
378 TestStartStop/group/newest-cni/serial/SecondStart 33.11
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.88
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
384 TestStartStop/group/newest-cni/serial/Pause 2.6
385 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
386 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
387 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
388 TestStartStop/group/old-k8s-version/serial/Pause 2.64
TestDownloadOnly/v1.20.0/json-events (47.48s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-726309 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-726309 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (47.475318986s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (47.48s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-726309
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-726309: exit status 85 (72.64529ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:23 UTC |          |
	|         | -p download-only-726309        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:23:20
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:23:20.191322   90039 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:23:20.191494   90039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:23:20.191504   90039 out.go:304] Setting ErrFile to fd 2...
	I0429 12:23:20.191508   90039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:23:20.191763   90039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	W0429 12:23:20.191901   90039 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18771-82690/.minikube/config/config.json: open /home/jenkins/minikube-integration/18771-82690/.minikube/config/config.json: no such file or directory
	I0429 12:23:20.192492   90039 out.go:298] Setting JSON to true
	I0429 12:23:20.193370   90039 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7544,"bootTime":1714385856,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:23:20.193473   90039 start.go:139] virtualization: kvm guest
	I0429 12:23:20.196317   90039 out.go:97] [download-only-726309] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:23:20.198100   90039 out.go:169] MINIKUBE_LOCATION=18771
	W0429 12:23:20.196515   90039 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 12:23:20.196526   90039 notify.go:220] Checking for updates...
	I0429 12:23:20.201204   90039 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:23:20.202579   90039 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:23:20.203896   90039 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:23:20.205249   90039 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 12:23:20.207762   90039 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 12:23:20.208018   90039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:23:20.243546   90039 out.go:97] Using the kvm2 driver based on user configuration
	I0429 12:23:20.243574   90039 start.go:297] selected driver: kvm2
	I0429 12:23:20.243583   90039 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:23:20.243987   90039 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:23:20.244119   90039 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18771-82690/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:23:20.260277   90039 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:23:20.260342   90039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:23:20.260951   90039 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 12:23:20.261112   90039 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 12:23:20.261176   90039 cni.go:84] Creating CNI manager for ""
	I0429 12:23:20.261190   90039 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 12:23:20.261199   90039 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 12:23:20.261258   90039 start.go:340] cluster config:
	{Name:download-only-726309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-726309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:23:20.261441   90039 iso.go:125] acquiring lock: {Name:mkedacf31368d400e657fc8150aebe85f02fab3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:23:20.263129   90039 out.go:97] Downloading VM boot image ...
	I0429 12:23:20.263202   90039 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18771-82690/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:23:37.486561   90039 out.go:97] Starting "download-only-726309" primary control-plane node in "download-only-726309" cluster
	I0429 12:23:37.486593   90039 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0429 12:23:37.652307   90039 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0429 12:23:37.652353   90039 cache.go:56] Caching tarball of preloaded images
	I0429 12:23:37.652559   90039 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0429 12:23:37.654444   90039 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 12:23:37.654467   90039 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0429 12:23:37.810341   90039 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0429 12:23:58.775701   90039 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0429 12:23:58.775793   90039 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0429 12:23:59.649009   90039 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0429 12:23:59.649366   90039 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/download-only-726309/config.json ...
	I0429 12:23:59.649399   90039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/download-only-726309/config.json: {Name:mkec0f4df7fa049ea77491b1e77572753cce90b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:23:59.649579   90039 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0429 12:23:59.649808   90039 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18771-82690/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-726309 host does not exist
	  To start a cluster, run: "minikube start -p download-only-726309"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-726309
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0/json-events (18.1s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-351879 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-351879 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (18.094872161s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (18.10s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-351879
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-351879: exit status 85 (69.447739ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:23 UTC |                     |
	|         | -p download-only-726309        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| delete  | -p download-only-726309        | download-only-726309 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC | 29 Apr 24 12:24 UTC |
	| start   | -o=json --download-only        | download-only-351879 | jenkins | v1.33.0 | 29 Apr 24 12:24 UTC |                     |
	|         | -p download-only-351879        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:24:07
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:24:07.998398   90345 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:24:07.998510   90345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:07.998519   90345 out.go:304] Setting ErrFile to fd 2...
	I0429 12:24:07.998522   90345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:07.998667   90345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:24:07.999212   90345 out.go:298] Setting JSON to true
	I0429 12:24:08.000042   90345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7592,"bootTime":1714385856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:24:08.000098   90345 start.go:139] virtualization: kvm guest
	I0429 12:24:08.002358   90345 out.go:97] [download-only-351879] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:24:08.003821   90345 out.go:169] MINIKUBE_LOCATION=18771
	I0429 12:24:08.002527   90345 notify.go:220] Checking for updates...
	I0429 12:24:08.006480   90345 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:24:08.007909   90345 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:24:08.009193   90345 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:24:08.010384   90345 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 12:24:08.012824   90345 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 12:24:08.013102   90345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:24:08.045481   90345 out.go:97] Using the kvm2 driver based on user configuration
	I0429 12:24:08.045519   90345 start.go:297] selected driver: kvm2
	I0429 12:24:08.045524   90345 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:24:08.045833   90345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:24:08.045903   90345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18771-82690/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 12:24:08.060722   90345 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 12:24:08.060780   90345 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:24:08.061260   90345 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 12:24:08.061432   90345 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 12:24:08.061499   90345 cni.go:84] Creating CNI manager for ""
	I0429 12:24:08.061513   90345 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 12:24:08.061524   90345 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 12:24:08.061589   90345 start.go:340] cluster config:
	{Name:download-only-351879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-351879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:24:08.061676   90345 iso.go:125] acquiring lock: {Name:mkedacf31368d400e657fc8150aebe85f02fab3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:24:08.063292   90345 out.go:97] Starting "download-only-351879" primary control-plane node in "download-only-351879" cluster
	I0429 12:24:08.063326   90345 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 12:24:08.712572   90345 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0429 12:24:08.712616   90345 cache.go:56] Caching tarball of preloaded images
	I0429 12:24:08.712797   90345 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 12:24:08.714678   90345 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 12:24:08.714710   90345 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 ...
	I0429 12:24:08.867278   90345 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:3a7aac5052a5448f24921f55001543e6 -> /home/jenkins/minikube-integration/18771-82690/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-351879 host does not exist
	  To start a cluster, run: "minikube start -p download-only-351879"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-351879
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-643325 --alsologtostderr --binary-mirror http://127.0.0.1:36767 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-643325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-643325
--- PASS: TestBinaryMirror (0.57s)

TestOffline (93.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-525758 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-525758 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m32.508298072s)
helpers_test.go:175: Cleaning up "offline-containerd-525758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-525758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-525758: (1.035934433s)
--- PASS: TestOffline (93.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051772
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-051772: exit status 85 (63.525115ms)

-- stdout --
	* Profile "addons-051772" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051772"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051772
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-051772: exit status 85 (62.190989ms)

-- stdout --
	* Profile "addons-051772" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051772"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (217.79s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-051772 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-051772 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m37.792069283s)
--- PASS: TestAddons/Setup (217.79s)

TestAddons/parallel/Registry (22.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 11.301223ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jlgc4" [9d9bc9f0-f92a-4894-917d-18a54af96e8f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.013104534s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4hmwj" [f42027ec-8780-4684-9de4-696063e93160] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004763328s
addons_test.go:340: (dbg) Run:  kubectl --context addons-051772 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-051772 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-051772 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.792701112s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 ip
2024/04/29 12:28:27 [DEBUG] GET http://192.168.39.38:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.64s)

TestAddons/parallel/Ingress (29.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-051772 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-051772 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-051772 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [80a179ca-84f1-417a-87a4-055eb74d82d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [80a179ca-84f1-417a-87a4-055eb74d82d8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.004951188s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-051772 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.38
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 addons disable ingress-dns --alsologtostderr -v=1: (1.091691801s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 addons disable ingress --alsologtostderr -v=1: (7.786240216s)
--- PASS: TestAddons/parallel/Ingress (29.05s)

TestAddons/parallel/InspektorGadget (11.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-slg9k" [41eebc86-a0a4-4edd-9df1-31946b3674e4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004480694s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-051772
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-051772: (5.855442438s)
--- PASS: TestAddons/parallel/InspektorGadget (11.86s)

TestAddons/parallel/MetricsServer (7.18s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.45892ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-52v7v" [00789e18-96fd-48cf-aac2-1dff3b7046c1] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.012962066s
addons_test.go:415: (dbg) Run:  kubectl --context addons-051772 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 addons disable metrics-server --alsologtostderr -v=1: (1.083370416s)
--- PASS: TestAddons/parallel/MetricsServer (7.18s)

TestAddons/parallel/HelmTiller (23.43s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.463754ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-f4xt8" [9a11663a-a4d0-4dbe-80d4-5a21e11bde15] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005728695s
addons_test.go:473: (dbg) Run:  kubectl --context addons-051772 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-051772 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (16.755029613s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (23.43s)

TestAddons/parallel/CSI (69.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.521567ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-051772 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-051772 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b75b76db-cff2-4313-89ad-c1e20adf0e3a] Pending
helpers_test.go:344: "task-pv-pod" [b75b76db-cff2-4313-89ad-c1e20adf0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b75b76db-cff2-4313-89ad-c1e20adf0e3a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003855007s
addons_test.go:584: (dbg) Run:  kubectl --context addons-051772 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-051772 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-051772 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-051772 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-051772 delete pod task-pv-pod: (1.266682259s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-051772 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-051772 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-051772 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c2f3fc84-433a-4e52-a94e-d45141e96b26] Pending
helpers_test.go:344: "task-pv-pod-restore" [c2f3fc84-433a-4e52-a94e-d45141e96b26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c2f3fc84-433a-4e52-a94e-d45141e96b26] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.004240766s
addons_test.go:626: (dbg) Run:  kubectl --context addons-051772 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-051772 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-051772 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.771135558s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.86s)

TestAddons/parallel/Headlamp (14.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-051772 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-9pfrn" [9756e182-88d9-44ea-bc25-58cec11379a7] Pending
helpers_test.go:344: "headlamp-7559bf459f-9pfrn" [9756e182-88d9-44ea-bc25-58cec11379a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-9pfrn" [9756e182-88d9-44ea-bc25-58cec11379a7] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005004163s
--- PASS: TestAddons/parallel/Headlamp (14.97s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-gsd8s" [f8cb4881-948e-43df-b397-b80f249040ef] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004342753s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-051772
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (63.51s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-051772 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-051772 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6] Pending
helpers_test.go:344: "test-local-path" [03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [03a0c5cb-d2ea-43a8-a9ae-4f6aa00086e6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.007333916s
addons_test.go:891: (dbg) Run:  kubectl --context addons-051772 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 ssh "cat /opt/local-path-provisioner/pvc-3e9f07d3-7c03-4974-9a62-4df0aaccce94_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-051772 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-051772 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-051772 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-051772 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.594020644s)
--- PASS: TestAddons/parallel/LocalPath (63.51s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-5mxzv" [99f5bb8d-4199-432a-a075-2efbe5fb376a] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005356693s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-051772 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-051772 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-051772
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-051772: (1m32.434030219s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051772
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051772
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-051772
--- PASS: TestAddons/StoppedEnableDisable (92.74s)

TestCertOptions (95.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-439534 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-439534 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m33.656413893s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-439534 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-439534 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-439534 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-439534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-439534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-439534: (1.028204798s)
--- PASS: TestCertOptions (95.19s)

TestCertExpiration (260.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-682435 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-682435 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (56.945749951s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-682435 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-682435 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (22.743581642s)
helpers_test.go:175: Cleaning up "cert-expiration-682435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-682435
--- PASS: TestCertExpiration (260.50s)

TestForceSystemdFlag (82.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-832848 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-832848 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m21.762236446s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-832848 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-832848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-832848
--- PASS: TestForceSystemdFlag (82.78s)

TestForceSystemdEnv (47.62s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-579986 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-579986 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (46.407447538s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-579986 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-579986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-579986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-579986: (1.008508667s)
--- PASS: TestForceSystemdEnv (47.62s)

TestKVMDriverInstallOrUpdate (8.46s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.46s)

TestErrorSpam/setup (45.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-163421 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-163421 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-163421 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-163421 --driver=kvm2  --container-runtime=containerd: (45.859896091s)
--- PASS: TestErrorSpam/setup (45.86s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (4.62s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop: (1.597739925s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop: (1.507725363s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-163421 --log_dir /tmp/nospam-163421 stop: (1.509584901s)
--- PASS: TestErrorSpam/stop (4.62s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18771-82690/.minikube/files/etc/test/nested/copy/90027/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (98.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0429 12:33:05.070896   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.076698   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.086974   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.107295   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.147743   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.228092   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.388535   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:05.709123   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:06.350076   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:07.630673   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:10.191562   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:15.311800   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:25.552398   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:33:46.033491   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-955425 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m38.384405417s)
--- PASS: TestFunctional/serial/StartWithProxy (98.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --alsologtostderr -v=8
E0429 12:34:26.993942   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-955425 --alsologtostderr -v=8: (45.003664717s)
functional_test.go:659: soft start took 45.004432592s for "functional-955425" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-955425 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:3.1: (1.175367658s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:3.3: (1.241913022s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 cache add registry.k8s.io/pause:latest: (1.164103914s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

TestFunctional/serial/CacheCmd/cache/add_local (3.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-955425 /tmp/TestFunctionalserialCacheCmdcacheadd_local595582454/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache add minikube-local-cache-test:functional-955425
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 cache add minikube-local-cache-test:functional-955425: (2.941063453s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache delete minikube-local-cache-test:functional-955425
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-955425
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.535915ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 cache reload: (1.090823205s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 kubectl -- --context functional-955425 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-955425 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-955425 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.868107387s)
functional_test.go:757: restart took 40.868274623s for "functional-955425" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.87s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-955425 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.62s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 logs: (1.617485059s)
--- PASS: TestFunctional/serial/LogsCmd (1.62s)

TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 logs --file /tmp/TestFunctionalserialLogsFileCmd2270426867/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 logs --file /tmp/TestFunctionalserialLogsFileCmd2270426867/001/logs.txt: (1.508098483s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.33s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-955425 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-955425
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-955425: exit status 115 (293.487953ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.183:30119 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-955425 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 config get cpus: exit status 14 (64.498562ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 config get cpus: exit status 14 (59.128433ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DashboardCmd (15.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-955425 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-955425 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 98906: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.47s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-955425 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (141.623513ms)

-- stdout --
	* [functional-955425] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0429 12:36:09.171509   98660 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:36:09.171622   98660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:36:09.171633   98660 out.go:304] Setting ErrFile to fd 2...
	I0429 12:36:09.171638   98660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:36:09.171896   98660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:36:09.172507   98660 out.go:298] Setting JSON to false
	I0429 12:36:09.173462   98660 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8313,"bootTime":1714385856,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:36:09.173525   98660 start.go:139] virtualization: kvm guest
	I0429 12:36:09.175588   98660 out.go:177] * [functional-955425] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:36:09.177443   98660 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 12:36:09.178762   98660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:36:09.177463   98660 notify.go:220] Checking for updates...
	I0429 12:36:09.181316   98660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:36:09.182613   98660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:36:09.183924   98660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:36:09.185148   98660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:36:09.186868   98660 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:36:09.187255   98660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:36:09.187294   98660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:36:09.202078   98660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
	I0429 12:36:09.202503   98660 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:36:09.203170   98660 main.go:141] libmachine: Using API Version  1
	I0429 12:36:09.203208   98660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:36:09.203530   98660 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:36:09.203742   98660 main.go:141] libmachine: (functional-955425) Calling .DriverName
	I0429 12:36:09.203998   98660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:36:09.204327   98660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:36:09.204370   98660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:36:09.219333   98660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I0429 12:36:09.219900   98660 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:36:09.220417   98660 main.go:141] libmachine: Using API Version  1
	I0429 12:36:09.220446   98660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:36:09.220820   98660 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:36:09.221017   98660 main.go:141] libmachine: (functional-955425) Calling .DriverName
	I0429 12:36:09.252192   98660 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 12:36:09.253711   98660 start.go:297] selected driver: kvm2
	I0429 12:36:09.253729   98660 start.go:901] validating driver "kvm2" against &{Name:functional-955425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-955425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:36:09.253884   98660 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:36:09.256477   98660 out.go:177] 
	W0429 12:36:09.257915   98660 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 12:36:09.259251   98660 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-955425 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-955425 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (149.084956ms)

-- stdout --
	* [functional-955425] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0429 12:36:10.346442   98841 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:36:10.346596   98841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:36:10.346609   98841 out.go:304] Setting ErrFile to fd 2...
	I0429 12:36:10.346617   98841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:36:10.346957   98841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:36:10.347541   98841 out.go:298] Setting JSON to false
	I0429 12:36:10.348520   98841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8314,"bootTime":1714385856,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:36:10.348583   98841 start.go:139] virtualization: kvm guest
	I0429 12:36:10.350833   98841 out.go:177] * [functional-955425] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0429 12:36:10.352673   98841 notify.go:220] Checking for updates...
	I0429 12:36:10.352681   98841 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 12:36:10.354219   98841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:36:10.355622   98841 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 12:36:10.357055   98841 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 12:36:10.358415   98841 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:36:10.359804   98841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:36:10.361297   98841 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:36:10.361770   98841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:36:10.361834   98841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:36:10.377936   98841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0429 12:36:10.378325   98841 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:36:10.378864   98841 main.go:141] libmachine: Using API Version  1
	I0429 12:36:10.378885   98841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:36:10.379187   98841 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:36:10.379371   98841 main.go:141] libmachine: (functional-955425) Calling .DriverName
	I0429 12:36:10.379606   98841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:36:10.379940   98841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:36:10.379984   98841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:36:10.395015   98841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0429 12:36:10.395432   98841 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:36:10.395949   98841 main.go:141] libmachine: Using API Version  1
	I0429 12:36:10.395973   98841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:36:10.396331   98841 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:36:10.396536   98841 main.go:141] libmachine: (functional-955425) Calling .DriverName
	I0429 12:36:10.427534   98841 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0429 12:36:10.428953   98841 start.go:297] selected driver: kvm2
	I0429 12:36:10.428967   98841 start.go:901] validating driver "kvm2" against &{Name:functional-955425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-955425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:36:10.429062   98841 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:36:10.431296   98841 out.go:177] 
	W0429 12:36:10.432584   98841 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 12:36:10.433894   98841 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)
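The second status invocation above passes a Go text/template via `-f`. As a minimal sketch of that template mechanics: the `Status` struct below is inferred from the field names in the template string (not taken from minikube's source), and the format string is reproduced verbatim, including its literal `kublet` label.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status carries the four fields the -f template above selects. Field
// names are inferred from the template string, not from minikube's source.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render applies the exact format string the test passes via -f.
func render(s Status) string {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	var b strings.Builder
	template.Must(template.New("status").Parse(format)).Execute(&b, s)
	return b.String()
}

func main() {
	fmt.Println(render(Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}))
	// → host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```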

TestFunctional/parallel/ServiceCmdConnect (13.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-955425 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-955425 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-xb294" [8d2b493c-e1b1-4266-97eb-686174b4f050] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-xb294" [8d2b493c-e1b1-4266-97eb-686174b4f050] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.005278446s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.183:31011
functional_test.go:1671: http://192.168.39.183:31011: success! body:


Hostname: hello-node-connect-57b4589c47-xb294

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.183:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.183:31011
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.84s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (49.36s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [84ac8a30-13da-4824-b461-c15bb44e2db0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004465896s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-955425 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-955425 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-955425 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-955425 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-955425 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-955425 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [891c6760-eb4a-4be9-a9ab-696a32d3b285] Pending
helpers_test.go:344: "sp-pod" [891c6760-eb4a-4be9-a9ab-696a32d3b285] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [891c6760-eb4a-4be9-a9ab-696a32d3b285] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.007135972s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-955425 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-955425 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-955425 delete -f testdata/storage-provisioner/pod.yaml: (1.455597138s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-955425 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb19ec06-65be-46d1-b7b7-9f7e1ea450d9] Pending
helpers_test.go:344: "sp-pod" [cb19ec06-65be-46d1-b7b7-9f7e1ea450d9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb19ec06-65be-46d1-b7b7-9f7e1ea450d9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004161064s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-955425 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.36s)

TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh -n functional-955425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cp functional-955425:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3056269738/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh -n functional-955425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh -n functional-955425 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

TestFunctional/parallel/MySQL (27.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-955425 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-79vq2" [50ab2ba8-39ed-480c-9f2f-ea23a9c26561] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-79vq2" [50ab2ba8-39ed-480c-9f2f-ea23a9c26561] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.014163539s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;": exit status 1 (555.432003ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;": exit status 1 (668.961077ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;": exit status 1 (195.983349ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-955425 exec mysql-64454c8b5c-79vq2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.46s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/90027/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /etc/test/nested/copy/90027/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/90027.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /etc/ssl/certs/90027.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/90027.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /usr/share/ca-certificates/90027.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/900272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /etc/ssl/certs/900272.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/900272.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /usr/share/ca-certificates/900272.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-955425 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
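The label query above uses Go's text/template to range over a map, emitting each key followed by a space. A standalone sketch of that template form, over sample labels rather than cluster data (text/template visits map keys in sorted order, so the output is deterministic):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// listKeys applies the same template shape as the kubectl --template above:
// iterate the map, print each key and a trailing space.
func listKeys(labels map[string]string) string {
	var b strings.Builder
	template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`)).Execute(&b, labels)
	return b.String()
}

func main() {
	// Sample labels; not read from the cluster.
	fmt.Println(listKeys(map[string]string{
		"kubernetes.io/os":     "linux",
		"minikube.k8s.io/name": "functional-955425",
	}))
	// → kubernetes.io/os minikube.k8s.io/name
}
```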

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "sudo systemctl is-active docker": exit status 1 (247.28015ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "sudo systemctl is-active crio": exit status 1 (218.195162ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

TestFunctional/parallel/License (0.8s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-955425 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-955425
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-955425
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-955425 image ls --format short --alsologtostderr:
I0429 12:36:19.944492   99184 out.go:291] Setting OutFile to fd 1 ...
I0429 12:36:19.944638   99184 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:19.944650   99184 out.go:304] Setting ErrFile to fd 2...
I0429 12:36:19.944655   99184 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:19.944922   99184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
I0429 12:36:19.946382   99184 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:19.946817   99184 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:19.947517   99184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:19.947568   99184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:19.962134   99184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
I0429 12:36:19.962611   99184 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:19.963183   99184 main.go:141] libmachine: Using API Version  1
I0429 12:36:19.963215   99184 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:19.963526   99184 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:19.963711   99184 main.go:141] libmachine: (functional-955425) Calling .GetState
I0429 12:36:19.965648   99184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:19.965697   99184 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:19.979753   99184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
I0429 12:36:19.980202   99184 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:19.980649   99184 main.go:141] libmachine: Using API Version  1
I0429 12:36:19.980663   99184 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:19.980981   99184 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:19.981162   99184 main.go:141] libmachine: (functional-955425) Calling .DriverName
I0429 12:36:19.981367   99184 ssh_runner.go:195] Run: systemctl --version
I0429 12:36:19.981392   99184 main.go:141] libmachine: (functional-955425) Calling .GetSSHHostname
I0429 12:36:19.983925   99184 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:19.984288   99184 main.go:141] libmachine: (functional-955425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:03:7d", ip: ""} in network mk-functional-955425: {Iface:virbr1 ExpiryTime:2024-04-29 13:32:34 +0000 UTC Type:0 Mac:52:54:00:59:03:7d Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-955425 Clientid:01:52:54:00:59:03:7d}
I0429 12:36:19.984323   99184 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined IP address 192.168.39.183 and MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:19.984484   99184 main.go:141] libmachine: (functional-955425) Calling .GetSSHPort
I0429 12:36:19.984630   99184 main.go:141] libmachine: (functional-955425) Calling .GetSSHKeyPath
I0429 12:36:19.984756   99184 main.go:141] libmachine: (functional-955425) Calling .GetSSHUsername
I0429 12:36:19.984860   99184 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/functional-955425/id_rsa Username:docker}
I0429 12:36:20.063368   99184 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:36:20.111819   99184 main.go:141] libmachine: Making call to close driver server
I0429 12:36:20.111833   99184 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:20.112137   99184 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:20.112168   99184 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:20.112184   99184 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:20.112201   99184 main.go:141] libmachine: Making call to close driver server
I0429 12:36:20.112213   99184 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:20.112459   99184 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:20.112475   99184 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:20.112510   99184 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-955425 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.0            | sha256:a0bf55 | 29MB   |
| registry.k8s.io/kube-scheduler              | v1.30.0            | sha256:259c82 | 19.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0            | sha256:c7aad4 | 31MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/google-containers/addon-resizer      | functional-955425  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.0            | sha256:c42f13 | 32.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-955425  | sha256:1b9f60 | 991B   |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:7383c2 | 71MB   |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-955425 image ls --format table --alsologtostderr:
I0429 12:36:21.982205   99419 out.go:291] Setting OutFile to fd 1 ...
I0429 12:36:21.982322   99419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:21.982330   99419 out.go:304] Setting ErrFile to fd 2...
I0429 12:36:21.982337   99419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:21.982535   99419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
I0429 12:36:21.983097   99419 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:21.983211   99419 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:21.983594   99419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:21.983643   99419 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:21.998110   99419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
I0429 12:36:21.998572   99419 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:21.999181   99419 main.go:141] libmachine: Using API Version  1
I0429 12:36:21.999254   99419 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:21.999596   99419 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:21.999793   99419 main.go:141] libmachine: (functional-955425) Calling .GetState
I0429 12:36:22.001508   99419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:22.001544   99419 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:22.015712   99419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
I0429 12:36:22.016122   99419 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:22.016573   99419 main.go:141] libmachine: Using API Version  1
I0429 12:36:22.016605   99419 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:22.016876   99419 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:22.017035   99419 main.go:141] libmachine: (functional-955425) Calling .DriverName
I0429 12:36:22.017243   99419 ssh_runner.go:195] Run: systemctl --version
I0429 12:36:22.017266   99419 main.go:141] libmachine: (functional-955425) Calling .GetSSHHostname
I0429 12:36:22.019643   99419 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:22.020016   99419 main.go:141] libmachine: (functional-955425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:03:7d", ip: ""} in network mk-functional-955425: {Iface:virbr1 ExpiryTime:2024-04-29 13:32:34 +0000 UTC Type:0 Mac:52:54:00:59:03:7d Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-955425 Clientid:01:52:54:00:59:03:7d}
I0429 12:36:22.020052   99419 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined IP address 192.168.39.183 and MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:22.020173   99419 main.go:141] libmachine: (functional-955425) Calling .GetSSHPort
I0429 12:36:22.020356   99419 main.go:141] libmachine: (functional-955425) Calling .GetSSHKeyPath
I0429 12:36:22.020519   99419 main.go:141] libmachine: (functional-955425) Calling .GetSSHUsername
I0429 12:36:22.020666   99419 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/functional-955425/id_rsa Username:docker}
I0429 12:36:22.103521   99419 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:36:22.179804   99419 main.go:141] libmachine: Making call to close driver server
I0429 12:36:22.179825   99419 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:22.180161   99419 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:22.180226   99419 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:22.180256   99419 main.go:141] libmachine: Making call to close driver server
I0429 12:36:22.180283   99419 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:22.180255   99419 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:22.180546   99419 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:22.180555   99419 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:22.180569   99419 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-955425 image ls --format json --alsologtostderr:
[{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"29020473"},{"id":"sha256:1b9f60d2294eff1c78a99e6f0c24476585af40612f21a1f019b26aea50fc77bd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-955425"],"size":"991"},{"id":"sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"70991807"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"19208660"},{"id":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"31030110"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-955425"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"32663599"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-955425 image ls --format json --alsologtostderr:
I0429 12:36:21.755615   99395 out.go:291] Setting OutFile to fd 1 ...
I0429 12:36:21.756130   99395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:21.756178   99395 out.go:304] Setting ErrFile to fd 2...
I0429 12:36:21.756195   99395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:21.756651   99395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
I0429 12:36:21.757790   99395 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:21.757903   99395 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:21.758462   99395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:21.758515   99395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:21.773092   99395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
I0429 12:36:21.773511   99395 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:21.774051   99395 main.go:141] libmachine: Using API Version  1
I0429 12:36:21.774075   99395 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:21.774389   99395 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:21.774570   99395 main.go:141] libmachine: (functional-955425) Calling .GetState
I0429 12:36:21.776324   99395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:21.776364   99395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:21.790833   99395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
I0429 12:36:21.791304   99395 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:21.791902   99395 main.go:141] libmachine: Using API Version  1
I0429 12:36:21.791928   99395 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:21.792213   99395 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:21.792387   99395 main.go:141] libmachine: (functional-955425) Calling .DriverName
I0429 12:36:21.792570   99395 ssh_runner.go:195] Run: systemctl --version
I0429 12:36:21.792590   99395 main.go:141] libmachine: (functional-955425) Calling .GetSSHHostname
I0429 12:36:21.795238   99395 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:21.795604   99395 main.go:141] libmachine: (functional-955425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:03:7d", ip: ""} in network mk-functional-955425: {Iface:virbr1 ExpiryTime:2024-04-29 13:32:34 +0000 UTC Type:0 Mac:52:54:00:59:03:7d Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-955425 Clientid:01:52:54:00:59:03:7d}
I0429 12:36:21.795634   99395 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined IP address 192.168.39.183 and MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:21.795769   99395 main.go:141] libmachine: (functional-955425) Calling .GetSSHPort
I0429 12:36:21.795911   99395 main.go:141] libmachine: (functional-955425) Calling .GetSSHKeyPath
I0429 12:36:21.796079   99395 main.go:141] libmachine: (functional-955425) Calling .GetSSHUsername
I0429 12:36:21.796208   99395 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/functional-955425/id_rsa Username:docker}
I0429 12:36:21.880498   99395 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:36:21.925255   99395 main.go:141] libmachine: Making call to close driver server
I0429 12:36:21.925269   99395 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:21.925566   99395 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:21.925584   99395 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:21.925592   99395 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:21.925606   99395 main.go:141] libmachine: Making call to close driver server
I0429 12:36:21.925614   99395 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:21.925878   99395 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:21.925912   99395 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:21.925885   99395 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-955425 image ls --format yaml --alsologtostderr:
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:1b9f60d2294eff1c78a99e6f0c24476585af40612f21a1f019b26aea50fc77bd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-955425
size: "991"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "70991807"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-955425
size: "10823156"
- id: sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "32663599"
- id: sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "29020473"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "19208660"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "31030110"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-955425 image ls --format yaml --alsologtostderr:
I0429 12:36:20.173293   99208 out.go:291] Setting OutFile to fd 1 ...
I0429 12:36:20.173429   99208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:20.173439   99208 out.go:304] Setting ErrFile to fd 2...
I0429 12:36:20.173445   99208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:20.173652   99208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
I0429 12:36:20.174249   99208 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:20.174375   99208 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:20.174852   99208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:20.174898   99208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:20.189425   99208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
I0429 12:36:20.189913   99208 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:20.190514   99208 main.go:141] libmachine: Using API Version  1
I0429 12:36:20.190541   99208 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:20.190873   99208 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:20.191079   99208 main.go:141] libmachine: (functional-955425) Calling .GetState
I0429 12:36:20.192919   99208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:20.192966   99208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:20.206782   99208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
I0429 12:36:20.207180   99208 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:20.207691   99208 main.go:141] libmachine: Using API Version  1
I0429 12:36:20.207712   99208 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:20.208014   99208 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:20.208214   99208 main.go:141] libmachine: (functional-955425) Calling .DriverName
I0429 12:36:20.208450   99208 ssh_runner.go:195] Run: systemctl --version
I0429 12:36:20.208484   99208 main.go:141] libmachine: (functional-955425) Calling .GetSSHHostname
I0429 12:36:20.211003   99208 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:20.211377   99208 main.go:141] libmachine: (functional-955425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:03:7d", ip: ""} in network mk-functional-955425: {Iface:virbr1 ExpiryTime:2024-04-29 13:32:34 +0000 UTC Type:0 Mac:52:54:00:59:03:7d Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-955425 Clientid:01:52:54:00:59:03:7d}
I0429 12:36:20.211410   99208 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined IP address 192.168.39.183 and MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:20.211560   99208 main.go:141] libmachine: (functional-955425) Calling .GetSSHPort
I0429 12:36:20.211734   99208 main.go:141] libmachine: (functional-955425) Calling .GetSSHKeyPath
I0429 12:36:20.211903   99208 main.go:141] libmachine: (functional-955425) Calling .GetSSHUsername
I0429 12:36:20.212050   99208 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/functional-955425/id_rsa Username:docker}
I0429 12:36:20.291078   99208 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:36:20.334466   99208 main.go:141] libmachine: Making call to close driver server
I0429 12:36:20.334481   99208 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:20.334781   99208 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:20.334807   99208 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:20.334816   99208 main.go:141] libmachine: Making call to close driver server
I0429 12:36:20.334822   99208 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:20.334831   99208 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:20.335044   99208 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:20.335064   99208 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:20.335073   99208 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh pgrep buildkitd: exit status 1 (203.423973ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image build -t localhost/my-image:functional-955425 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image build -t localhost/my-image:functional-955425 testdata/build --alsologtostderr: (4.74246252s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-955425 image build -t localhost/my-image:functional-955425 testdata/build --alsologtostderr:
I0429 12:36:20.700594   99262 out.go:291] Setting OutFile to fd 1 ...
I0429 12:36:20.700866   99262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:20.700882   99262 out.go:304] Setting ErrFile to fd 2...
I0429 12:36:20.700888   99262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:36:20.701151   99262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
I0429 12:36:20.701963   99262 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:20.702498   99262 config.go:182] Loaded profile config "functional-955425": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:36:20.702883   99262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:20.702952   99262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:20.717928   99262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
I0429 12:36:20.718345   99262 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:20.718910   99262 main.go:141] libmachine: Using API Version  1
I0429 12:36:20.718939   99262 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:20.719279   99262 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:20.719499   99262 main.go:141] libmachine: (functional-955425) Calling .GetState
I0429 12:36:20.721412   99262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:36:20.721457   99262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:36:20.735723   99262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
I0429 12:36:20.736127   99262 main.go:141] libmachine: () Calling .GetVersion
I0429 12:36:20.736518   99262 main.go:141] libmachine: Using API Version  1
I0429 12:36:20.736546   99262 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:36:20.736850   99262 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:36:20.737016   99262 main.go:141] libmachine: (functional-955425) Calling .DriverName
I0429 12:36:20.737241   99262 ssh_runner.go:195] Run: systemctl --version
I0429 12:36:20.737262   99262 main.go:141] libmachine: (functional-955425) Calling .GetSSHHostname
I0429 12:36:20.739658   99262 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:20.740030   99262 main.go:141] libmachine: (functional-955425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:03:7d", ip: ""} in network mk-functional-955425: {Iface:virbr1 ExpiryTime:2024-04-29 13:32:34 +0000 UTC Type:0 Mac:52:54:00:59:03:7d Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-955425 Clientid:01:52:54:00:59:03:7d}
I0429 12:36:20.740063   99262 main.go:141] libmachine: (functional-955425) DBG | domain functional-955425 has defined IP address 192.168.39.183 and MAC address 52:54:00:59:03:7d in network mk-functional-955425
I0429 12:36:20.740197   99262 main.go:141] libmachine: (functional-955425) Calling .GetSSHPort
I0429 12:36:20.740359   99262 main.go:141] libmachine: (functional-955425) Calling .GetSSHKeyPath
I0429 12:36:20.740475   99262 main.go:141] libmachine: (functional-955425) Calling .GetSSHUsername
I0429 12:36:20.740626   99262 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/functional-955425/id_rsa Username:docker}
I0429 12:36:20.829410   99262 build_images.go:161] Building image from path: /tmp/build.137140294.tar
I0429 12:36:20.829473   99262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 12:36:20.844368   99262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.137140294.tar
I0429 12:36:20.850910   99262 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.137140294.tar: stat -c "%s %y" /var/lib/minikube/build/build.137140294.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.137140294.tar': No such file or directory
I0429 12:36:20.850970   99262 ssh_runner.go:362] scp /tmp/build.137140294.tar --> /var/lib/minikube/build/build.137140294.tar (3072 bytes)
I0429 12:36:20.901965   99262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.137140294
I0429 12:36:20.913948   99262 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.137140294 -xf /var/lib/minikube/build/build.137140294.tar
I0429 12:36:20.925141   99262 containerd.go:394] Building image: /var/lib/minikube/build/build.137140294
I0429 12:36:20.925225   99262 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.137140294 --local dockerfile=/var/lib/minikube/build/build.137140294 --output type=image,name=localhost/my-image:functional-955425
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:d8bbeabec89727bcb37e63f907cf4d77637f4b7a7c0824910e34a9412fd759ad
#8 exporting manifest sha256:d8bbeabec89727bcb37e63f907cf4d77637f4b7a7c0824910e34a9412fd759ad 0.0s done
#8 exporting config sha256:ba8bc0eb3a1a683ef9597a4c797be6f38a4360f76aca5795247c1f185b00a896 0.0s done
#8 naming to localhost/my-image:functional-955425 done
#8 DONE 0.2s
I0429 12:36:25.351094   99262 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.137140294 --local dockerfile=/var/lib/minikube/build/build.137140294 --output type=image,name=localhost/my-image:functional-955425: (4.425830719s)
I0429 12:36:25.351170   99262 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.137140294
I0429 12:36:25.368572   99262 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.137140294.tar
I0429 12:36:25.381983   99262 build_images.go:217] Built localhost/my-image:functional-955425 from /tmp/build.137140294.tar
I0429 12:36:25.382016   99262 build_images.go:133] succeeded building to: functional-955425
I0429 12:36:25.382020   99262 build_images.go:134] failed building to: 
I0429 12:36:25.382052   99262 main.go:141] libmachine: Making call to close driver server
I0429 12:36:25.382065   99262 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:25.382373   99262 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:25.382393   99262 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:36:25.382406   99262 main.go:141] libmachine: Making call to close driver server
I0429 12:36:25.382415   99262 main.go:141] libmachine: (functional-955425) Calling .Close
I0429 12:36:25.382417   99262 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:25.382677   99262 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:36:25.382693   99262 main.go:141] libmachine: (functional-955425) DBG | Closing plugin on server side
I0429 12:36:25.382704   99262 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
2024/04/29 12:36:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.18s)

TestFunctional/parallel/ImageCommands/Setup (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.237921729s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-955425
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/MountCmd/any-port (20.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdany-port2445052427/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714394142111680735" to /tmp/TestFunctionalparallelMountCmdany-port2445052427/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714394142111680735" to /tmp/TestFunctionalparallelMountCmdany-port2445052427/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714394142111680735" to /tmp/TestFunctionalparallelMountCmdany-port2445052427/001/test-1714394142111680735
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.848595ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 12:35 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 12:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 12:35 test-1714394142111680735
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh cat /mount-9p/test-1714394142111680735
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-955425 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [63323723-8f23-40ed-9401-fe77b370ec0b] Pending
helpers_test.go:344: "busybox-mount" [63323723-8f23-40ed-9401-fe77b370ec0b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [63323723-8f23-40ed-9401-fe77b370ec0b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [63323723-8f23-40ed-9401-fe77b370ec0b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.004435488s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-955425 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdany-port2445052427/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr: (4.699904616s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr
E0429 12:35:48.914512   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr: (2.750104189s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.6426187s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-955425
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image load --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr: (4.246766242s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image save gcr.io/google-containers/addon-resizer:functional-955425 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image save gcr.io/google-containers/addon-resizer:functional-955425 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.562990508s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image rm gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.486615801s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-955425
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 image save --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 image save --daemon gcr.io/google-containers/addon-resizer:functional-955425 --alsologtostderr: (2.520581582s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-955425
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.56s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdspecific-port2746939634/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.554395ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdspecific-port2746939634/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "sudo umount -f /mount-9p": exit status 1 (260.375441ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-955425 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdspecific-port2746939634/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T" /mount1: exit status 1 (502.583857ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-955425 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-955425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup594417839/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-955425 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-955425 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-kcqwc" [0d39307d-a750-411b-bd09-be605db4534f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-kcqwc" [0d39307d-a750-411b-bd09-be605db4534f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004839885s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "274.054097ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.918735ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "233.812556ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "59.120518ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/ServiceCmd/List (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 service list: (1.238114049s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-955425 service list -o json: (1.373493233s)
functional_test.go:1490: Took "1.373602663s" to run "out/minikube-linux-amd64 -p functional-955425 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.183:31056
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-955425 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.183:31056
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-955425
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-955425
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-955425
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (221s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-303559 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:38:05.070908   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:38:32.755027   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-303559 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m40.299733298s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (221.00s)

TestMultiControlPlane/serial/DeployApp (8.03s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-303559 -- rollout status deployment/busybox: (5.408193921s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-499ls -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-lstlq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-mmnzb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-499ls -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-lstlq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-mmnzb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-499ls -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-lstlq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-mmnzb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.03s)

TestMultiControlPlane/serial/PingHostFromPods (1.42s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-499ls -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-499ls -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-lstlq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-lstlq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-mmnzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-303559 -- exec busybox-fc5497c4f-mmnzb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
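[Editor's note] The PingHostFromPods steps above resolve host.minikube.internal inside a pod and extract the host IP with `awk 'NR==5' | cut -d' ' -f3`, i.e. the third space-separated field of line 5 of busybox nslookup output. A minimal local sketch of that extraction, using a hypothetical nslookup transcript (no minikube required; the sample text and IP are illustrative, not taken from this run):

```shell
# Hypothetical busybox-style nslookup output; line 5 carries the host record.
sample='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Same pipeline the test runs inside the pod: take line 5, split on single
# spaces, keep field 3 (the IP address).
printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3
```

Note the pipeline is brittle by design: it assumes the resolver output has exactly this line layout, which is why the test pins a specific busybox image.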

TestMultiControlPlane/serial/AddWorkerNode (50.98s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-303559 -v=7 --alsologtostderr
E0429 12:40:40.867873   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:40.873150   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:40.883488   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:40.903798   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:40.944198   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:41.024553   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:41.184879   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:41.505471   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:42.146148   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:43.427093   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:45.987857   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:40:51.108812   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:41:01.350114   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-303559 -v=7 --alsologtostderr: (50.090414943s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.98s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-303559 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (13.8s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp testdata/cp-test.txt ha-303559:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile522776310/001/cp-test_ha-303559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559:/home/docker/cp-test.txt ha-303559-m02:/home/docker/cp-test_ha-303559_ha-303559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test_ha-303559_ha-303559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559:/home/docker/cp-test.txt ha-303559-m03:/home/docker/cp-test_ha-303559_ha-303559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test_ha-303559_ha-303559-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559:/home/docker/cp-test.txt ha-303559-m04:/home/docker/cp-test_ha-303559_ha-303559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test_ha-303559_ha-303559-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp testdata/cp-test.txt ha-303559-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile522776310/001/cp-test_ha-303559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m02:/home/docker/cp-test.txt ha-303559:/home/docker/cp-test_ha-303559-m02_ha-303559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test_ha-303559-m02_ha-303559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m02:/home/docker/cp-test.txt ha-303559-m03:/home/docker/cp-test_ha-303559-m02_ha-303559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test_ha-303559-m02_ha-303559-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m02:/home/docker/cp-test.txt ha-303559-m04:/home/docker/cp-test_ha-303559-m02_ha-303559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test.txt"
E0429 12:41:21.830945   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test_ha-303559-m02_ha-303559-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp testdata/cp-test.txt ha-303559-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile522776310/001/cp-test_ha-303559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m03:/home/docker/cp-test.txt ha-303559:/home/docker/cp-test_ha-303559-m03_ha-303559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test_ha-303559-m03_ha-303559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m03:/home/docker/cp-test.txt ha-303559-m02:/home/docker/cp-test_ha-303559-m03_ha-303559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test_ha-303559-m03_ha-303559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m03:/home/docker/cp-test.txt ha-303559-m04:/home/docker/cp-test_ha-303559-m03_ha-303559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test_ha-303559-m03_ha-303559-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp testdata/cp-test.txt ha-303559-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile522776310/001/cp-test_ha-303559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m04:/home/docker/cp-test.txt ha-303559:/home/docker/cp-test_ha-303559-m04_ha-303559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559 "sudo cat /home/docker/cp-test_ha-303559-m04_ha-303559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m04:/home/docker/cp-test.txt ha-303559-m02:/home/docker/cp-test_ha-303559-m04_ha-303559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m02 "sudo cat /home/docker/cp-test_ha-303559-m04_ha-303559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 cp ha-303559-m04:/home/docker/cp-test.txt ha-303559-m03:/home/docker/cp-test_ha-303559-m04_ha-303559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 ssh -n ha-303559-m03 "sudo cat /home/docker/cp-test_ha-303559-m04_ha-303559-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.80s)
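[Editor's note] Each CopyFile step above is a round-trip: `minikube cp` pushes testdata/cp-test.txt to a node, then `ssh ... sudo cat` reads it back so the contents can be compared. A local sketch of that pattern with plain `cp`/`diff` standing in for the minikube transport (paths and contents are illustrative only):

```shell
set -eu
host=$(mktemp -d)   # stands in for the host working dir
node=$(mktemp -d)   # stands in for a node's /home/docker

echo "cp-test contents" > "$host/cp-test.txt"
cp "$host/cp-test.txt" "$node/cp-test.txt"       # minikube cp: host -> node
cp "$node/cp-test.txt" "$host/cp-test_back.txt"  # minikube cp: node -> host
diff "$host/cp-test.txt" "$host/cp-test_back.txt" # must be byte-identical

rm -rf "$host" "$node"
```

The real test repeats this for every node pair (m01 through m04), which is why the section runs to dozens of nearly identical `cp`/`ssh` invocations.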

TestMultiControlPlane/serial/StopSecondaryNode (93.16s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 node stop m02 -v=7 --alsologtostderr
E0429 12:42:02.791167   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-303559 node stop m02 -v=7 --alsologtostderr: (1m32.474673942s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr: exit status 7 (683.047738ms)

-- stdout --
	ha-303559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-303559-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-303559-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-303559-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 12:43:01.153788  104062 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:43:01.154087  104062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:43:01.154098  104062 out.go:304] Setting ErrFile to fd 2...
	I0429 12:43:01.154102  104062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:43:01.154352  104062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:43:01.154600  104062 out.go:298] Setting JSON to false
	I0429 12:43:01.154639  104062 mustload.go:65] Loading cluster: ha-303559
	I0429 12:43:01.154770  104062 notify.go:220] Checking for updates...
	I0429 12:43:01.155123  104062 config.go:182] Loaded profile config "ha-303559": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:43:01.155141  104062 status.go:255] checking status of ha-303559 ...
	I0429 12:43:01.155646  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.155749  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.175723  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0429 12:43:01.176294  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.176858  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.176880  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.177242  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.177439  104062 main.go:141] libmachine: (ha-303559) Calling .GetState
	I0429 12:43:01.178873  104062 status.go:330] ha-303559 host status = "Running" (err=<nil>)
	I0429 12:43:01.178910  104062 host.go:66] Checking if "ha-303559" exists ...
	I0429 12:43:01.179177  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.179212  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.194439  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0429 12:43:01.194865  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.195327  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.195350  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.195657  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.195870  104062 main.go:141] libmachine: (ha-303559) Calling .GetIP
	I0429 12:43:01.198395  104062 main.go:141] libmachine: (ha-303559) DBG | domain ha-303559 has defined MAC address 52:54:00:5c:22:45 in network mk-ha-303559
	I0429 12:43:01.198868  104062 main.go:141] libmachine: (ha-303559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:22:45", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:36:48 +0000 UTC Type:0 Mac:52:54:00:5c:22:45 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-303559 Clientid:01:52:54:00:5c:22:45}
	I0429 12:43:01.198902  104062 main.go:141] libmachine: (ha-303559) DBG | domain ha-303559 has defined IP address 192.168.39.119 and MAC address 52:54:00:5c:22:45 in network mk-ha-303559
	I0429 12:43:01.199073  104062 host.go:66] Checking if "ha-303559" exists ...
	I0429 12:43:01.199402  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.199446  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.214081  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0429 12:43:01.214532  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.215045  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.215074  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.215423  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.215653  104062 main.go:141] libmachine: (ha-303559) Calling .DriverName
	I0429 12:43:01.215907  104062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:43:01.215947  104062 main.go:141] libmachine: (ha-303559) Calling .GetSSHHostname
	I0429 12:43:01.218741  104062 main.go:141] libmachine: (ha-303559) DBG | domain ha-303559 has defined MAC address 52:54:00:5c:22:45 in network mk-ha-303559
	I0429 12:43:01.219142  104062 main.go:141] libmachine: (ha-303559) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:22:45", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:36:48 +0000 UTC Type:0 Mac:52:54:00:5c:22:45 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-303559 Clientid:01:52:54:00:5c:22:45}
	I0429 12:43:01.219181  104062 main.go:141] libmachine: (ha-303559) DBG | domain ha-303559 has defined IP address 192.168.39.119 and MAC address 52:54:00:5c:22:45 in network mk-ha-303559
	I0429 12:43:01.219387  104062 main.go:141] libmachine: (ha-303559) Calling .GetSSHPort
	I0429 12:43:01.219552  104062 main.go:141] libmachine: (ha-303559) Calling .GetSSHKeyPath
	I0429 12:43:01.219717  104062 main.go:141] libmachine: (ha-303559) Calling .GetSSHUsername
	I0429 12:43:01.219863  104062 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/ha-303559/id_rsa Username:docker}
	I0429 12:43:01.311105  104062 ssh_runner.go:195] Run: systemctl --version
	I0429 12:43:01.319426  104062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:43:01.339994  104062 kubeconfig.go:125] found "ha-303559" server: "https://192.168.39.254:8443"
	I0429 12:43:01.340028  104062 api_server.go:166] Checking apiserver status ...
	I0429 12:43:01.340071  104062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:43:01.358562  104062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup
	W0429 12:43:01.370811  104062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:43:01.370857  104062 ssh_runner.go:195] Run: ls
	I0429 12:43:01.376355  104062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:43:01.380703  104062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:43:01.380729  104062 status.go:422] ha-303559 apiserver status = Running (err=<nil>)
	I0429 12:43:01.380740  104062 status.go:257] ha-303559 status: &{Name:ha-303559 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:43:01.380760  104062 status.go:255] checking status of ha-303559-m02 ...
	I0429 12:43:01.381060  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.381101  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.396177  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0429 12:43:01.396680  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.397149  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.397170  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.397514  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.397792  104062 main.go:141] libmachine: (ha-303559-m02) Calling .GetState
	I0429 12:43:01.399705  104062 status.go:330] ha-303559-m02 host status = "Stopped" (err=<nil>)
	I0429 12:43:01.399722  104062 status.go:343] host is not running, skipping remaining checks
	I0429 12:43:01.399730  104062 status.go:257] ha-303559-m02 status: &{Name:ha-303559-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:43:01.399748  104062 status.go:255] checking status of ha-303559-m03 ...
	I0429 12:43:01.400167  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.400214  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.415332  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0429 12:43:01.415855  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.416332  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.416357  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.416709  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.416950  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetState
	I0429 12:43:01.418500  104062 status.go:330] ha-303559-m03 host status = "Running" (err=<nil>)
	I0429 12:43:01.418521  104062 host.go:66] Checking if "ha-303559-m03" exists ...
	I0429 12:43:01.418945  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.418994  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.433723  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0429 12:43:01.434176  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.434634  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.434657  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.435009  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.435168  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetIP
	I0429 12:43:01.438002  104062 main.go:141] libmachine: (ha-303559-m03) DBG | domain ha-303559-m03 has defined MAC address 52:54:00:d1:c6:75 in network mk-ha-303559
	I0429 12:43:01.438427  104062 main.go:141] libmachine: (ha-303559-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c6:75", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:21 +0000 UTC Type:0 Mac:52:54:00:d1:c6:75 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-303559-m03 Clientid:01:52:54:00:d1:c6:75}
	I0429 12:43:01.438462  104062 main.go:141] libmachine: (ha-303559-m03) DBG | domain ha-303559-m03 has defined IP address 192.168.39.208 and MAC address 52:54:00:d1:c6:75 in network mk-ha-303559
	I0429 12:43:01.438584  104062 host.go:66] Checking if "ha-303559-m03" exists ...
	I0429 12:43:01.438908  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.438943  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.453585  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0429 12:43:01.454005  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.454464  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.454485  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.454782  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.454953  104062 main.go:141] libmachine: (ha-303559-m03) Calling .DriverName
	I0429 12:43:01.455122  104062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:43:01.455146  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetSSHHostname
	I0429 12:43:01.457903  104062 main.go:141] libmachine: (ha-303559-m03) DBG | domain ha-303559-m03 has defined MAC address 52:54:00:d1:c6:75 in network mk-ha-303559
	I0429 12:43:01.458328  104062 main.go:141] libmachine: (ha-303559-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c6:75", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:39:21 +0000 UTC Type:0 Mac:52:54:00:d1:c6:75 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-303559-m03 Clientid:01:52:54:00:d1:c6:75}
	I0429 12:43:01.458365  104062 main.go:141] libmachine: (ha-303559-m03) DBG | domain ha-303559-m03 has defined IP address 192.168.39.208 and MAC address 52:54:00:d1:c6:75 in network mk-ha-303559
	I0429 12:43:01.458503  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetSSHPort
	I0429 12:43:01.458673  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetSSHKeyPath
	I0429 12:43:01.458796  104062 main.go:141] libmachine: (ha-303559-m03) Calling .GetSSHUsername
	I0429 12:43:01.458971  104062 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/ha-303559-m03/id_rsa Username:docker}
	I0429 12:43:01.543793  104062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:43:01.564212  104062 kubeconfig.go:125] found "ha-303559" server: "https://192.168.39.254:8443"
	I0429 12:43:01.564246  104062 api_server.go:166] Checking apiserver status ...
	I0429 12:43:01.564286  104062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:43:01.582322  104062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	W0429 12:43:01.595458  104062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:43:01.595535  104062 ssh_runner.go:195] Run: ls
	I0429 12:43:01.601180  104062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:43:01.607528  104062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:43:01.607555  104062 status.go:422] ha-303559-m03 apiserver status = Running (err=<nil>)
	I0429 12:43:01.607563  104062 status.go:257] ha-303559-m03 status: &{Name:ha-303559-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:43:01.607578  104062 status.go:255] checking status of ha-303559-m04 ...
	I0429 12:43:01.607910  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.607956  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.622886  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0429 12:43:01.623327  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.623798  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.623819  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.624139  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.624342  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetState
	I0429 12:43:01.626043  104062 status.go:330] ha-303559-m04 host status = "Running" (err=<nil>)
	I0429 12:43:01.626066  104062 host.go:66] Checking if "ha-303559-m04" exists ...
	I0429 12:43:01.626480  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.626536  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.642029  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0429 12:43:01.642456  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.642979  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.643008  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.643382  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.643595  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetIP
	I0429 12:43:01.646547  104062 main.go:141] libmachine: (ha-303559-m04) DBG | domain ha-303559-m04 has defined MAC address 52:54:00:78:94:95 in network mk-ha-303559
	I0429 12:43:01.647030  104062 main.go:141] libmachine: (ha-303559-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:94:95", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:40:39 +0000 UTC Type:0 Mac:52:54:00:78:94:95 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-303559-m04 Clientid:01:52:54:00:78:94:95}
	I0429 12:43:01.647049  104062 main.go:141] libmachine: (ha-303559-m04) DBG | domain ha-303559-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:78:94:95 in network mk-ha-303559
	I0429 12:43:01.647195  104062 host.go:66] Checking if "ha-303559-m04" exists ...
	I0429 12:43:01.647545  104062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:43:01.647589  104062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:43:01.662589  104062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0429 12:43:01.663018  104062 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:43:01.663450  104062 main.go:141] libmachine: Using API Version  1
	I0429 12:43:01.663477  104062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:43:01.663792  104062 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:43:01.663992  104062 main.go:141] libmachine: (ha-303559-m04) Calling .DriverName
	I0429 12:43:01.664183  104062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:43:01.664215  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetSSHHostname
	I0429 12:43:01.667214  104062 main.go:141] libmachine: (ha-303559-m04) DBG | domain ha-303559-m04 has defined MAC address 52:54:00:78:94:95 in network mk-ha-303559
	I0429 12:43:01.667745  104062 main.go:141] libmachine: (ha-303559-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:94:95", ip: ""} in network mk-ha-303559: {Iface:virbr1 ExpiryTime:2024-04-29 13:40:39 +0000 UTC Type:0 Mac:52:54:00:78:94:95 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-303559-m04 Clientid:01:52:54:00:78:94:95}
	I0429 12:43:01.667774  104062 main.go:141] libmachine: (ha-303559-m04) DBG | domain ha-303559-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:78:94:95 in network mk-ha-303559
	I0429 12:43:01.667915  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetSSHPort
	I0429 12:43:01.668094  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetSSHKeyPath
	I0429 12:43:01.668195  104062 main.go:141] libmachine: (ha-303559-m04) Calling .GetSSHUsername
	I0429 12:43:01.668379  104062 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/ha-303559-m04/id_rsa Username:docker}
	I0429 12:43:01.757163  104062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:43:01.776692  104062 status.go:257] ha-303559-m04 status: &{Name:ha-303559-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.16s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

TestMultiControlPlane/serial/RestartSecondaryNode (45.44s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 node start m02 -v=7 --alsologtostderr
E0429 12:43:05.069476   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:43:24.712247   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-303559 node start m02 -v=7 --alsologtostderr: (44.484405057s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (450.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-303559 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-303559 -v=7 --alsologtostderr
E0429 12:45:40.868363   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:46:08.552692   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:48:05.069420   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-303559 -v=7 --alsologtostderr: (4m38.727867728s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-303559 --wait=true -v=7 --alsologtostderr
E0429 12:49:28.115794   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:50:40.868448   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-303559 --wait=true -v=7 --alsologtostderr: (2m51.663213175s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-303559
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (450.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-303559 node delete m03 -v=7 --alsologtostderr: (7.448031179s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.22s)
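The go-template query run above extracts each node's Ready condition. The same invariant can be checked from `kubectl get nodes -o json` output; a minimal sketch over a hard-coded sample payload (field names follow the Kubernetes NodeStatus API, the node names are taken from this report):

```python
# Extract the Ready condition status for each node, mirroring the
# go-template used by the test above. The payload is a hard-coded sample
# shaped like `kubectl get nodes -o json` output.
nodes = {
    "items": [
        {"metadata": {"name": "ha-303559"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"},
         ]}},
        {"metadata": {"name": "ha-303559-m04"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"},
         ]}},
    ]
}

def ready_statuses(payload):
    """Return {node-name: Ready-condition status} for every node."""
    return {
        item["metadata"]["name"]: next(
            c["status"] for c in item["status"]["conditions"]
            if c["type"] == "Ready"
        )
        for item in payload["items"]
    }

print(ready_statuses(nodes))  # → {'ha-303559': 'True', 'ha-303559-m04': 'True'}
```

A cluster is healthy for this check when every value is "True".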

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMultiControlPlane/serial/StopCluster (276.42s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 stop -v=7 --alsologtostderr
E0429 12:53:05.069220   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 12:55:40.868606   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-303559 stop -v=7 --alsologtostderr: (4m36.30950027s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr: exit status 7 (114.588116ms)

-- stdout --
	ha-303559
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-303559-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-303559-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 12:56:03.712998  107895 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:56:03.713095  107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:56:03.713111  107895 out.go:304] Setting ErrFile to fd 2...
	I0429 12:56:03.713115  107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:56:03.713317  107895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 12:56:03.713497  107895 out.go:298] Setting JSON to false
	I0429 12:56:03.713524  107895 mustload.go:65] Loading cluster: ha-303559
	I0429 12:56:03.713626  107895 notify.go:220] Checking for updates...
	I0429 12:56:03.713923  107895 config.go:182] Loaded profile config "ha-303559": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:56:03.713938  107895 status.go:255] checking status of ha-303559 ...
	I0429 12:56:03.714311  107895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:56:03.714363  107895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:56:03.734857  107895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0429 12:56:03.735240  107895 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:56:03.735788  107895 main.go:141] libmachine: Using API Version  1
	I0429 12:56:03.735814  107895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:56:03.736182  107895 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:56:03.736381  107895 main.go:141] libmachine: (ha-303559) Calling .GetState
	I0429 12:56:03.737910  107895 status.go:330] ha-303559 host status = "Stopped" (err=<nil>)
	I0429 12:56:03.737926  107895 status.go:343] host is not running, skipping remaining checks
	I0429 12:56:03.737934  107895 status.go:257] ha-303559 status: &{Name:ha-303559 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:56:03.737957  107895 status.go:255] checking status of ha-303559-m02 ...
	I0429 12:56:03.738246  107895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:56:03.738280  107895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:56:03.752228  107895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I0429 12:56:03.752604  107895 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:56:03.753003  107895 main.go:141] libmachine: Using API Version  1
	I0429 12:56:03.753024  107895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:56:03.753369  107895 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:56:03.753528  107895 main.go:141] libmachine: (ha-303559-m02) Calling .GetState
	I0429 12:56:03.754965  107895 status.go:330] ha-303559-m02 host status = "Stopped" (err=<nil>)
	I0429 12:56:03.754974  107895 status.go:343] host is not running, skipping remaining checks
	I0429 12:56:03.754981  107895 status.go:257] ha-303559-m02 status: &{Name:ha-303559-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:56:03.755003  107895 status.go:255] checking status of ha-303559-m04 ...
	I0429 12:56:03.755276  107895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:56:03.755319  107895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:56:03.769361  107895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0429 12:56:03.769693  107895 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:56:03.770137  107895 main.go:141] libmachine: Using API Version  1
	I0429 12:56:03.770170  107895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:56:03.770466  107895 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:56:03.770659  107895 main.go:141] libmachine: (ha-303559-m04) Calling .GetState
	I0429 12:56:03.772184  107895 status.go:330] ha-303559-m04 host status = "Stopped" (err=<nil>)
	I0429 12:56:03.772197  107895 status.go:343] host is not running, skipping remaining checks
	I0429 12:56:03.772203  107895 status.go:257] ha-303559-m04 status: &{Name:ha-303559-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.42s)

TestMultiControlPlane/serial/RestartCluster (162.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-303559 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:57:03.913014   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 12:58:05.069328   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-303559 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m41.886676966s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (162.70s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (69.59s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-303559 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-303559 --control-plane -v=7 --alsologtostderr: (1m8.725515602s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-303559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (103.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-726109 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0429 13:00:40.868043   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-726109 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m43.69303028s)
--- PASS: TestJSONOutput/start/Command (103.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-726109 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-726109 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-726109 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-726109 --output=json --user=testUser: (7.352329918s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-796869 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-796869 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.519392ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"858c587e-0e41-4f9f-8196-880e6776d77e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-796869] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fce300f3-c9c2-4fbb-92ac-5d3cafb2d7fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"4e15591a-b561-462d-8244-ebc29fb3ef54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5831a21d-8246-4ef0-aa42-d986dade9a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig"}}
	{"specversion":"1.0","id":"b747a96f-6e98-4b1f-b492-4a6ad30d68ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube"}}
	{"specversion":"1.0","id":"292c194a-6cb9-470b-8174-acde0a16f19b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"df8a4beb-93b2-4492-9447-bde461d19c24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4df3ea5f-34e0-4770-940d-24a214fa10ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-796869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-796869
--- PASS: TestErrorJSONOutput (0.22s)
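Note: each line minikube prints with `--output=json` is a CloudEvents 1.0 envelope, as seen in the stdout above; the payload of interest sits under the `data` key, and the error event carries type `io.k8s.sigs.minikube.error`. A minimal sketch of pulling that error out of such a stream (the helper name is illustrative, not part of minikube):

```python
import json

def find_error_event(lines):
    """Return the data of the first io.k8s.sigs.minikube.error event, or None."""
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip any non-JSON log noise interleaved in the stream
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

# The error event captured verbatim in the test output above:
sample = '''{"specversion":"1.0","id":"4df3ea5f-34e0-4770-940d-24a214fa10ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'''
err = find_error_event([sample])
# err["name"] is "DRV_UNSUPPORTED_OS" and err["exitcode"] matches the
# exit status 56 the test asserted.
```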

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (101.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-162476 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-162476 --driver=kvm2  --container-runtime=containerd: (50.758875844s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-166024 --driver=kvm2  --container-runtime=containerd
E0429 13:03:05.069152   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-166024 --driver=kvm2  --container-runtime=containerd: (47.351825723s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-162476
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-166024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-166024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-166024
helpers_test.go:175: Cleaning up "first-162476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-162476
--- PASS: TestMinikubeProfile (101.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-902540 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-902540 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.411789425s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.41s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-902540 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-902540 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-922506 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-922506 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.856407781s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-902540 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-922506
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-922506: (2.290608242s)
--- PASS: TestMountStart/serial/Stop (2.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-922506
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-922506: (24.022686603s)
--- PASS: TestMountStart/serial/RestartStopped (25.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-922506 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334446 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 13:05:40.868092   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 13:06:08.116442   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334446 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.6496792s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-334446 -- rollout status deployment/busybox: (4.366230274s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-phxzc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-zrb57 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-phxzc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-zrb57 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-phxzc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-zrb57 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.01s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-phxzc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-phxzc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-zrb57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334446 -- exec busybox-fc5497c4f-zrb57 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
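Note: the pod-to-host check above recovers the host IP by piping busybox `nslookup host.minikube.internal` through `awk 'NR==5' | cut -d' ' -f3`, i.e. take line 5, split on single spaces, keep the third field. A small sketch of that same extraction in Python; the sample nslookup output below is hypothetical, since the exact format varies by busybox version:

```python
def extract_field(text, line_no=5, field_no=3, sep=" "):
    """Mimic `awk 'NR==<line_no>' | cut -d'<sep>' -f<field_no>` on text."""
    lines = text.splitlines()
    if len(lines) < line_no:
        return ""
    # cut counts fields 1-based and splits on every delimiter occurrence,
    # so consecutive delimiters produce empty fields, just like str.split(sep)
    fields = lines[line_no - 1].split(sep)
    return fields[field_no - 1] if len(fields) >= field_no else ""

# Hypothetical busybox nslookup output for host.minikube.internal:
sample = """Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""
ip = extract_field(sample)
# line 5 is "Address 1: 192.168.39.1 host.minikube.internal",
# whose third space-separated field is the host IP the test then pings.
```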

                                                
                                    
TestMultiNode/serial/AddNode (42.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334446 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-334446 -v 3 --alsologtostderr: (42.043668369s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.62s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-334446 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp testdata/cp-test.txt multinode-334446:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2177164690/001/cp-test_multinode-334446.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446:/home/docker/cp-test.txt multinode-334446-m02:/home/docker/cp-test_multinode-334446_multinode-334446-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test_multinode-334446_multinode-334446-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446:/home/docker/cp-test.txt multinode-334446-m03:/home/docker/cp-test_multinode-334446_multinode-334446-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test_multinode-334446_multinode-334446-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp testdata/cp-test.txt multinode-334446-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2177164690/001/cp-test_multinode-334446-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m02:/home/docker/cp-test.txt multinode-334446:/home/docker/cp-test_multinode-334446-m02_multinode-334446.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test_multinode-334446-m02_multinode-334446.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m02:/home/docker/cp-test.txt multinode-334446-m03:/home/docker/cp-test_multinode-334446-m02_multinode-334446-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test_multinode-334446-m02_multinode-334446-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp testdata/cp-test.txt multinode-334446-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2177164690/001/cp-test_multinode-334446-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m03:/home/docker/cp-test.txt multinode-334446:/home/docker/cp-test_multinode-334446-m03_multinode-334446.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446 "sudo cat /home/docker/cp-test_multinode-334446-m03_multinode-334446.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 cp multinode-334446-m03:/home/docker/cp-test.txt multinode-334446-m02:/home/docker/cp-test_multinode-334446-m03_multinode-334446-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 ssh -n multinode-334446-m02 "sudo cat /home/docker/cp-test_multinode-334446-m03_multinode-334446-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.63s)

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-334446 node stop m03: (1.498608202s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334446 status: exit status 7 (428.713618ms)

                                                
                                                
-- stdout --
	multinode-334446
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334446-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334446-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr: exit status 7 (433.086381ms)

                                                
                                                
-- stdout --
	multinode-334446
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334446-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334446-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:07:50.873558  115621 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:07:50.873761  115621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:07:50.873770  115621 out.go:304] Setting ErrFile to fd 2...
	I0429 13:07:50.873774  115621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:07:50.873957  115621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 13:07:50.874105  115621 out.go:298] Setting JSON to false
	I0429 13:07:50.874130  115621 mustload.go:65] Loading cluster: multinode-334446
	I0429 13:07:50.874174  115621 notify.go:220] Checking for updates...
	I0429 13:07:50.874488  115621 config.go:182] Loaded profile config "multinode-334446": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 13:07:50.874502  115621 status.go:255] checking status of multinode-334446 ...
	I0429 13:07:50.874905  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:50.874957  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:50.891871  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0429 13:07:50.892313  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:50.892916  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:50.892949  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:50.893266  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:50.893472  115621 main.go:141] libmachine: (multinode-334446) Calling .GetState
	I0429 13:07:50.894916  115621 status.go:330] multinode-334446 host status = "Running" (err=<nil>)
	I0429 13:07:50.894933  115621 host.go:66] Checking if "multinode-334446" exists ...
	I0429 13:07:50.895224  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:50.895260  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:50.910198  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0429 13:07:50.910702  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:50.911126  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:50.911145  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:50.911448  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:50.911714  115621 main.go:141] libmachine: (multinode-334446) Calling .GetIP
	I0429 13:07:50.914466  115621 main.go:141] libmachine: (multinode-334446) DBG | domain multinode-334446 has defined MAC address 52:54:00:7f:e7:83 in network mk-multinode-334446
	I0429 13:07:50.914840  115621 main.go:141] libmachine: (multinode-334446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e7:83", ip: ""} in network mk-multinode-334446: {Iface:virbr1 ExpiryTime:2024-04-29 14:05:20 +0000 UTC Type:0 Mac:52:54:00:7f:e7:83 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-334446 Clientid:01:52:54:00:7f:e7:83}
	I0429 13:07:50.914869  115621 main.go:141] libmachine: (multinode-334446) DBG | domain multinode-334446 has defined IP address 192.168.39.183 and MAC address 52:54:00:7f:e7:83 in network mk-multinode-334446
	I0429 13:07:50.915002  115621 host.go:66] Checking if "multinode-334446" exists ...
	I0429 13:07:50.915301  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:50.915348  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:50.930106  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0429 13:07:50.930480  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:50.930927  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:50.930952  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:50.931285  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:50.931478  115621 main.go:141] libmachine: (multinode-334446) Calling .DriverName
	I0429 13:07:50.931694  115621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:07:50.931736  115621 main.go:141] libmachine: (multinode-334446) Calling .GetSSHHostname
	I0429 13:07:50.934343  115621 main.go:141] libmachine: (multinode-334446) DBG | domain multinode-334446 has defined MAC address 52:54:00:7f:e7:83 in network mk-multinode-334446
	I0429 13:07:50.934746  115621 main.go:141] libmachine: (multinode-334446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:e7:83", ip: ""} in network mk-multinode-334446: {Iface:virbr1 ExpiryTime:2024-04-29 14:05:20 +0000 UTC Type:0 Mac:52:54:00:7f:e7:83 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-334446 Clientid:01:52:54:00:7f:e7:83}
	I0429 13:07:50.934774  115621 main.go:141] libmachine: (multinode-334446) DBG | domain multinode-334446 has defined IP address 192.168.39.183 and MAC address 52:54:00:7f:e7:83 in network mk-multinode-334446
	I0429 13:07:50.934890  115621 main.go:141] libmachine: (multinode-334446) Calling .GetSSHPort
	I0429 13:07:50.935058  115621 main.go:141] libmachine: (multinode-334446) Calling .GetSSHKeyPath
	I0429 13:07:50.935238  115621 main.go:141] libmachine: (multinode-334446) Calling .GetSSHUsername
	I0429 13:07:50.935372  115621 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/multinode-334446/id_rsa Username:docker}
	I0429 13:07:51.015729  115621 ssh_runner.go:195] Run: systemctl --version
	I0429 13:07:51.022251  115621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:07:51.037189  115621 kubeconfig.go:125] found "multinode-334446" server: "https://192.168.39.183:8443"
	I0429 13:07:51.037224  115621 api_server.go:166] Checking apiserver status ...
	I0429 13:07:51.037276  115621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:07:51.050890  115621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0429 13:07:51.060965  115621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:07:51.061048  115621 ssh_runner.go:195] Run: ls
	I0429 13:07:51.065768  115621 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0429 13:07:51.071237  115621 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0429 13:07:51.071259  115621 status.go:422] multinode-334446 apiserver status = Running (err=<nil>)
	I0429 13:07:51.071269  115621 status.go:257] multinode-334446 status: &{Name:multinode-334446 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:07:51.071285  115621 status.go:255] checking status of multinode-334446-m02 ...
	I0429 13:07:51.071575  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:51.071625  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:51.087012  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0429 13:07:51.087394  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:51.087857  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:51.087881  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:51.088207  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:51.088394  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetState
	I0429 13:07:51.089927  115621 status.go:330] multinode-334446-m02 host status = "Running" (err=<nil>)
	I0429 13:07:51.089945  115621 host.go:66] Checking if "multinode-334446-m02" exists ...
	I0429 13:07:51.090228  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:51.090260  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:51.105007  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I0429 13:07:51.105394  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:51.105827  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:51.105851  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:51.106134  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:51.106299  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetIP
	I0429 13:07:51.108731  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | domain multinode-334446-m02 has defined MAC address 52:54:00:8f:08:e9 in network mk-multinode-334446
	I0429 13:07:51.109183  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:08:e9", ip: ""} in network mk-multinode-334446: {Iface:virbr1 ExpiryTime:2024-04-29 14:06:25 +0000 UTC Type:0 Mac:52:54:00:8f:08:e9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-334446-m02 Clientid:01:52:54:00:8f:08:e9}
	I0429 13:07:51.109215  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | domain multinode-334446-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:8f:08:e9 in network mk-multinode-334446
	I0429 13:07:51.109359  115621 host.go:66] Checking if "multinode-334446-m02" exists ...
	I0429 13:07:51.109776  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:51.109821  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:51.124720  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0429 13:07:51.125151  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:51.125636  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:51.125662  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:51.125967  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:51.126156  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .DriverName
	I0429 13:07:51.126344  115621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:07:51.126365  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetSSHHostname
	I0429 13:07:51.129002  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | domain multinode-334446-m02 has defined MAC address 52:54:00:8f:08:e9 in network mk-multinode-334446
	I0429 13:07:51.129350  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:08:e9", ip: ""} in network mk-multinode-334446: {Iface:virbr1 ExpiryTime:2024-04-29 14:06:25 +0000 UTC Type:0 Mac:52:54:00:8f:08:e9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-334446-m02 Clientid:01:52:54:00:8f:08:e9}
	I0429 13:07:51.129372  115621 main.go:141] libmachine: (multinode-334446-m02) DBG | domain multinode-334446-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:8f:08:e9 in network mk-multinode-334446
	I0429 13:07:51.129495  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetSSHPort
	I0429 13:07:51.129687  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetSSHKeyPath
	I0429 13:07:51.129828  115621 main.go:141] libmachine: (multinode-334446-m02) Calling .GetSSHUsername
	I0429 13:07:51.129973  115621 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18771-82690/.minikube/machines/multinode-334446-m02/id_rsa Username:docker}
	I0429 13:07:51.215949  115621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:07:51.231798  115621 status.go:257] multinode-334446-m02 status: &{Name:multinode-334446-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:07:51.231835  115621 status.go:255] checking status of multinode-334446-m03 ...
	I0429 13:07:51.232210  115621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:07:51.232260  115621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:07:51.247427  115621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0429 13:07:51.247953  115621 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:07:51.248430  115621 main.go:141] libmachine: Using API Version  1
	I0429 13:07:51.248456  115621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:07:51.248779  115621 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:07:51.248955  115621 main.go:141] libmachine: (multinode-334446-m03) Calling .GetState
	I0429 13:07:51.250637  115621 status.go:330] multinode-334446-m03 host status = "Stopped" (err=<nil>)
	I0429 13:07:51.250652  115621 status.go:343] host is not running, skipping remaining checks
	I0429 13:07:51.250659  115621 status.go:257] multinode-334446-m03 status: &{Name:multinode-334446-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 node start m03 -v=7 --alsologtostderr
E0429 13:08:05.071125   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-334446 node start m03 -v=7 --alsologtostderr: (26.608677551s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.26s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (295.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334446
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-334446
E0429 13:10:40.868260   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-334446: (3m5.491023354s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334446 --wait=true -v=8 --alsologtostderr
E0429 13:13:05.069612   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334446 --wait=true -v=8 --alsologtostderr: (1m50.177945565s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334446
--- PASS: TestMultiNode/serial/RestartKeepsNodes (295.78s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-334446 node delete m03: (1.841639553s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.42s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (184.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 stop
E0429 13:13:43.914143   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 13:15:40.868634   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-334446 stop: (3m3.921393902s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334446 status: exit status 7 (93.172284ms)

                                                
                                                
-- stdout --
	multinode-334446
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-334446-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr: exit status 7 (92.814295ms)

                                                
                                                
-- stdout --
	multinode-334446
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-334446-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:16:20.786562  118258 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:16:20.786686  118258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:16:20.786695  118258 out.go:304] Setting ErrFile to fd 2...
	I0429 13:16:20.786699  118258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:16:20.786907  118258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 13:16:20.787058  118258 out.go:298] Setting JSON to false
	I0429 13:16:20.787085  118258 mustload.go:65] Loading cluster: multinode-334446
	I0429 13:16:20.787128  118258 notify.go:220] Checking for updates...
	I0429 13:16:20.787451  118258 config.go:182] Loaded profile config "multinode-334446": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 13:16:20.787466  118258 status.go:255] checking status of multinode-334446 ...
	I0429 13:16:20.787897  118258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:16:20.787950  118258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:16:20.805344  118258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
	I0429 13:16:20.805716  118258 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:16:20.806277  118258 main.go:141] libmachine: Using API Version  1
	I0429 13:16:20.806313  118258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:16:20.806729  118258 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:16:20.807002  118258 main.go:141] libmachine: (multinode-334446) Calling .GetState
	I0429 13:16:20.808701  118258 status.go:330] multinode-334446 host status = "Stopped" (err=<nil>)
	I0429 13:16:20.808713  118258 status.go:343] host is not running, skipping remaining checks
	I0429 13:16:20.808719  118258 status.go:257] multinode-334446 status: &{Name:multinode-334446 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:16:20.808754  118258 status.go:255] checking status of multinode-334446-m02 ...
	I0429 13:16:20.809032  118258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 13:16:20.809075  118258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 13:16:20.823263  118258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0429 13:16:20.823611  118258 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:16:20.824038  118258 main.go:141] libmachine: Using API Version  1
	I0429 13:16:20.824057  118258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:16:20.824341  118258 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:16:20.824505  118258 main.go:141] libmachine: (multinode-334446-m02) Calling .GetState
	I0429 13:16:20.825870  118258 status.go:330] multinode-334446-m02 host status = "Stopped" (err=<nil>)
	I0429 13:16:20.825886  118258 status.go:343] host is not running, skipping remaining checks
	I0429 13:16:20.825892  118258 status.go:257] multinode-334446-m02 status: &{Name:multinode-334446-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334446 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334446 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m18.610684487s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334446 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.16s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334446
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334446-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-334446-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.046832ms)

                                                
                                                
-- stdout --
	* [multinode-334446-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-334446-m02' is duplicated with machine name 'multinode-334446-m02' in profile 'multinode-334446'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334446-m03 --driver=kvm2  --container-runtime=containerd
E0429 13:18:05.069143   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334446-m03 --driver=kvm2  --container-runtime=containerd: (45.026197845s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334446
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-334446: exit status 80 (232.656286ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-334446 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-334446-m03 already exists in multinode-334446-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-334446-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-334446-m03: (1.012920013s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.40s)

                                                
                                    
TestPreload (398.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-905724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0429 13:20:40.868309   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-905724 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m24.80950773s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-905724 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-905724 image pull gcr.io/k8s-minikube/busybox: (3.045978566s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-905724
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-905724: (1m31.779557828s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-905724 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0429 13:22:48.116679   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 13:23:05.069133   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-905724 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m38.088429572s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-905724 image list
helpers_test.go:175: Cleaning up "test-preload-905724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-905724
--- PASS: TestPreload (398.83s)

                                                
                                    
TestScheduledStopUnix (116.96s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-924582 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0429 13:25:40.867812   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-924582 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.236927851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924582 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-924582 -n scheduled-stop-924582
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924582 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924582 -n scheduled-stop-924582
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-924582
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-924582
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-924582: exit status 7 (86.683351ms)

                                                
                                                
-- stdout --
	scheduled-stop-924582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924582 -n scheduled-stop-924582
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924582 -n scheduled-stop-924582: exit status 7 (75.532891ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-924582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-924582
--- PASS: TestScheduledStopUnix (116.96s)

TestRunningBinaryUpgrade (191.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3467826174 start -p running-upgrade-047202 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3467826174 start -p running-upgrade-047202 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m25.063910638s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-047202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-047202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m41.792060959s)
helpers_test.go:175: Cleaning up "running-upgrade-047202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-047202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-047202: (1.204416447s)
--- PASS: TestRunningBinaryUpgrade (191.32s)

TestKubernetesUpgrade (245.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m8.462489148s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-906388
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-906388: (2.351420892s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-906388 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-906388 status --format={{.Host}}: exit status 7 (88.253341ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.98734217s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-906388 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (102.898116ms)

-- stdout --
	* [kubernetes-upgrade-906388] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-906388
	    minikube start -p kubernetes-upgrade-906388 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9063882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-906388 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0429 13:30:23.915298   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 13:30:40.868593   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-906388 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.634818653s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-906388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-906388
--- PASS: TestKubernetesUpgrade (245.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (99.529992ms)

-- stdout --
	* [NoKubernetes-568915] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (102.14s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568915 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568915 --driver=kvm2  --container-runtime=containerd: (1m41.869591222s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-568915 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.14s)

TestNetworkPlugins/group/false (3.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-705606 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-705606 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (126.463187ms)

-- stdout --
	* [false-705606] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0429 13:27:07.043221  123592 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:27:07.043337  123592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:27:07.043346  123592 out.go:304] Setting ErrFile to fd 2...
	I0429 13:27:07.043349  123592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:27:07.043569  123592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-82690/.minikube/bin
	I0429 13:27:07.044184  123592 out.go:298] Setting JSON to false
	I0429 13:27:07.045054  123592 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11371,"bootTime":1714385856,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 13:27:07.045116  123592 start.go:139] virtualization: kvm guest
	I0429 13:27:07.047405  123592 out.go:177] * [false-705606] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 13:27:07.048702  123592 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 13:27:07.048719  123592 notify.go:220] Checking for updates...
	I0429 13:27:07.049881  123592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 13:27:07.051292  123592 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-82690/kubeconfig
	I0429 13:27:07.052621  123592 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-82690/.minikube
	I0429 13:27:07.054028  123592 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 13:27:07.055437  123592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 13:27:07.057072  123592 config.go:182] Loaded profile config "NoKubernetes-568915": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 13:27:07.057215  123592 config.go:182] Loaded profile config "force-systemd-env-579986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 13:27:07.057316  123592 config.go:182] Loaded profile config "offline-containerd-525758": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 13:27:07.057412  123592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 13:27:07.092720  123592 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 13:27:07.094154  123592 start.go:297] selected driver: kvm2
	I0429 13:27:07.094175  123592 start.go:901] validating driver "kvm2" against <nil>
	I0429 13:27:07.094188  123592 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 13:27:07.096504  123592 out.go:177] 
	W0429 13:27:07.097878  123592 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0429 13:27:07.099224  123592 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-705606 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-705606" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-705606

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"
>>> host: crio daemon status:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"
>>> host: crio daemon config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"
>>> host: /etc/crio:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"
>>> host: crio config:
* Profile "false-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-705606"
----------------------- debugLogs end: false-705606 [took: 3.100001095s] --------------------------------
helpers_test.go:175: Cleaning up "false-705606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-705606
--- PASS: TestNetworkPlugins/group/false (3.37s)

TestStoppedBinaryUpgrade/Setup (3.72s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.72s)

TestStoppedBinaryUpgrade/Upgrade (216.96s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2757302501 start -p stopped-upgrade-632627 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0429 13:28:05.068906   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2757302501 start -p stopped-upgrade-632627 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m44.427760447s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2757302501 -p stopped-upgrade-632627 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2757302501 -p stopped-upgrade-632627 stop: (1.56347337s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-632627 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-632627 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m50.965282442s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (216.96s)

TestNoKubernetes/serial/StartWithStopK8s (67.48s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m6.42347596s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-568915 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-568915 status -o json: exit status 2 (238.630311ms)
-- stdout --
	{"Name":"NoKubernetes-568915","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-568915
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.48s)

TestNoKubernetes/serial/Start (59.13s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568915 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (59.126453386s)
--- PASS: TestNoKubernetes/serial/Start (59.13s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-568915 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-568915 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.129126ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.69s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

TestNoKubernetes/serial/Stop (2.07s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-568915
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-568915: (2.073610904s)
--- PASS: TestNoKubernetes/serial/Stop (2.07s)

TestNoKubernetes/serial/StartNoArgs (26.74s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-568915 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-568915 --driver=kvm2  --container-runtime=containerd: (26.735730907s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.74s)

TestPause/serial/Start (64.99s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-026009 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-026009 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m4.993380547s)
--- PASS: TestPause/serial/Start (64.99s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-568915 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-568915 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.491082ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-632627
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-632627: (1.082838545s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestPause/serial/SecondStartNoReconfiguration (75.62s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-026009 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-026009 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.586027431s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (75.62s)

TestNetworkPlugins/group/auto/Start (67.47s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0429 13:33:05.069877   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m7.471676792s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.47s)

TestNetworkPlugins/group/kindnet/Start (70.97s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m10.970746756s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.97s)

TestPause/serial/Pause (0.88s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-026009 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

TestPause/serial/VerifyStatus (0.31s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-026009 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-026009 --output=json --layout=cluster: exit status 2 (307.350311ms)
-- stdout --
	{"Name":"pause-026009","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-026009","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.68s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-026009 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.89s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-026009 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (1.06s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-026009 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-026009 --alsologtostderr -v=5: (1.055881935s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

TestPause/serial/VerifyDeletedResources (1.13s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.128864995s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.13s)

TestNetworkPlugins/group/calico/Start (106.9s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m46.903837234s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.90s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h7jrm" [4db4dd8a-f7ca-448c-b9da-e36f5573a69d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h7jrm" [4db4dd8a-f7ca-448c-b9da-e36f5573a69d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005537445s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (87.97s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m27.966887541s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.97s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lgmv2" [554fab10-91a0-4c5a-befc-e85c2fc4fe24] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005655664s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5xlvd" [134c5322-47aa-43ef-bb57-782f09079387] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5xlvd" [134c5322-47aa-43ef-bb57-782f09079387] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005048233s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (106.8s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m46.797429601s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (106.80s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6tt87" [5a83b6e2-c3f9-4586-9ed7-2a73ba423579] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005957344s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.29s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bbmjx" [a3eb006d-36fc-4f7e-b8b4-a82a7e127997] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 13:35:40.867750   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bbmjx" [a3eb006d-36fc-4f7e-b8b4-a82a7e127997] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006343737s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.29s)

TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (86.04s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m26.042660818s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.04s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kdgb5" [8f58c092-d8f9-4f26-80bc-eeb5bf7388a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kdgb5" [8f58c092-d8f9-4f26-80bc-eeb5bf7388a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004282295s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (78.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-705606 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m18.64951805s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.65s)

TestStartStop/group/old-k8s-version/serial/FirstStart (175.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-779470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-779470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m55.016399991s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (175.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-99tdh" [7d4b39c8-e0ad-49fe-b14a-ef6a5b056df5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-99tdh" [7d4b39c8-e0ad-49fe-b14a-ef6a5b056df5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004849992s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.02s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (5.08s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r6ccn" [d5cb8192-fc5d-427a-8e71-87b8501e0d98] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.074835918s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.08s)

TestStartStop/group/no-preload/serial/FirstStart (117.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-948050 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-948050 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m57.979056355s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.98s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-705606 replace --force -f testdata/netcat-deployment.yaml: (1.108884854s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wv45m" [2cce6a45-fa85-4dcc-9982-a509e85f98e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wv45m" [2cce6a45-fa85-4dcc-9982-a509e85f98e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005118136s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-705606 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-705606 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6gj2t" [d713fa1c-c3bd-4243-bb3d-0b84c81ab548] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6gj2t" [d713fa1c-c3bd-4243-bb3d-0b84c81ab548] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005441871s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-705606 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-705606 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0429 13:46:27.902372   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (66.70s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m6.699062555s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.70s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (128.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-827786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:38:05.068883   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-827786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (2m8.84787887s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (128.85s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863169 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [389ab8ae-0eb4-4f7e-9388-7d6ce8a9d53e] Pending
helpers_test.go:344: "busybox" [389ab8ae-0eb4-4f7e-9388-7d6ce8a9d53e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0429 13:39:05.069239   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.074614   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.084857   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.105187   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.145539   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.225968   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.386747   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:05.707357   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
helpers_test.go:344: "busybox" [389ab8ae-0eb4-4f7e-9388-7d6ce8a9d53e] Running
E0429 13:39:06.348199   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:07.628568   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:10.188814   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006742575s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-863169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-863169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048413905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-863169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (92.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-863169 --alsologtostderr -v=3
E0429 13:39:15.309406   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-863169 --alsologtostderr -v=3: (1m32.457589554s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.46s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-948050 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30c25d42-52e1-47c1-9366-89efa492687a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30c25d42-52e1-47c1-9366-89efa492687a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004592763s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-948050 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-779470 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db77b00e-dba6-4a3e-94ad-d93358a97ca4] Pending
helpers_test.go:344: "busybox" [db77b00e-dba6-4a3e-94ad-d93358a97ca4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0429 13:39:25.550133   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
helpers_test.go:344: "busybox" [db77b00e-dba6-4a3e-94ad-d93358a97ca4] Running
E0429 13:39:28.117726   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00472463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-779470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-948050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-948050 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/no-preload/serial/Stop (92.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-948050 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-948050 --alsologtostderr -v=3: (1m32.539419805s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-779470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-779470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/old-k8s-version/serial/Stop (92.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-779470 --alsologtostderr -v=3
E0429 13:39:34.757832   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:34.763121   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:34.773388   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:34.793739   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:34.834055   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:34.915041   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:35.075726   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:35.395886   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:36.036588   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:37.316889   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:39.877366   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:44.998580   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:39:46.031235   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:39:55.239264   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-779470 --alsologtostderr -v=3: (1m32.506897065s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.51s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-827786 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [792d0ef1-2e25-4158-a0b2-19ecb8022c14] Pending
helpers_test.go:344: "busybox" [792d0ef1-2e25-4158-a0b2-19ecb8022c14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [792d0ef1-2e25-4158-a0b2-19ecb8022c14] Running
E0429 13:40:15.720155   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004896116s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-827786 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-827786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-827786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083603142s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-827786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-827786 --alsologtostderr -v=3
E0429 13:40:26.991789   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:40:32.070807   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.076116   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.086433   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.106747   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.147114   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.227558   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.388040   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:32.708718   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:33.349364   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:34.629990   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:37.190202   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:40.868396   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
E0429 13:40:42.310494   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-827786 --alsologtostderr -v=3: (1m32.517086178s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863169 -n embed-certs-863169
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863169 -n embed-certs-863169: exit status 7 (81.27047ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-863169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
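The "status error: exit status 7 (may be ok)" notes above reflect how these tests interpret `minikube status` after a stop: exit code 0 means the host is running, exit code 7 means it is stopped (the expected state here), and any other code is a real failure. A minimal sketch of that convention; the helper name is hypothetical and not part of the test suite:

```shell
# Hypothetical helper mirroring the exit-code handling seen in the log above:
# `minikube status` exits 0 when the host is running and 7 when it is stopped,
# so after `minikube stop` both codes are acceptable ("may be ok").
status_after_stop_ok() {
  case "$1" in
    0|7) echo "may be ok" ;;   # running, or stopped as expected
    *)   echo "unexpected" ;;  # genuine status failure
  esac
}
```

For example, `status_after_stop_ok 7` prints `may be ok`, matching the log line above.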

TestStartStop/group/embed-certs/serial/SecondStart (297.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-863169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:40:52.550938   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:40:56.681157   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:41:00.218662   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.223930   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.234272   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.254575   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.294915   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.375293   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.535641   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:00.855838   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:01.496376   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:02.777145   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:05.338052   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-863169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (4m56.953461906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-863169 -n embed-certs-863169
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-948050 -n no-preload-948050
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-948050 -n no-preload-948050: exit status 7 (92.046107ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-948050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (298.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-948050 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-948050 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (4m58.569212568s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-948050 -n no-preload-948050
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-779470 -n old-k8s-version-779470
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-779470 -n old-k8s-version-779470: exit status 7 (86.596601ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-779470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (550.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-779470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0429 13:41:10.459111   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:13.032087   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:41:20.699489   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:41.179921   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:41:48.912131   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-779470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (9m10.321979329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-779470 -n old-k8s-version-779470
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (550.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786: exit status 7 (78.380893ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-827786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-827786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:41:53.992890   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:41:56.896821   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:56.902144   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:56.912496   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:56.932806   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:56.973191   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:57.053535   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:57.213973   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:57.534570   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:58.174791   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:41:59.455744   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:42:02.016409   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:42:07.136812   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:42:17.376941   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:42:18.602255   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:42:19.453583   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.458889   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.469186   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.489523   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.529941   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.610337   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:19.770540   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:20.090997   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:20.731805   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:22.012307   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:22.140831   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:42:24.572897   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:29.360977   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.366317   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.376592   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.396965   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.437471   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.517837   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.678377   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:29.693621   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:29.999063   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:30.640128   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:31.920278   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:34.481422   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:37.857818   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:42:39.601591   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:42:39.934337   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:42:49.841855   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:43:00.415218   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:43:05.069205   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/addons-051772/client.crt: no such file or directory
E0429 13:43:10.322485   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:43:15.913459   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:43:18.817984   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:43:41.375738   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:43:44.061686   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
E0429 13:43:51.283607   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:44:05.069287   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:44:32.752682   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/auto-705606/client.crt: no such file or directory
E0429 13:44:34.757397   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:44:40.739095   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
E0429 13:45:02.443441   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/kindnet-705606/client.crt: no such file or directory
E0429 13:45:03.296756   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/flannel-705606/client.crt: no such file or directory
E0429 13:45:13.204542   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/bridge-705606/client.crt: no such file or directory
E0429 13:45:32.070424   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:45:40.868461   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-827786 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m0.715134239s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.99s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-khtbq" [522d2254-0c41-4057-a5ff-c03ad673ce60] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004878857s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-khtbq" [522d2254-0c41-4057-a5ff-c03ad673ce60] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004190236s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-863169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-863169 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-863169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863169 -n embed-certs-863169
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863169 -n embed-certs-863169: exit status 2 (266.228282ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863169 -n embed-certs-863169
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863169 -n embed-certs-863169: exit status 2 (263.663385ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-863169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-863169 -n embed-certs-863169
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-863169 -n embed-certs-863169
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)
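The Pause test above follows a fixed flow: pause, confirm the API server reports Paused and the kubelet Stopped (tolerating exit status 2), then unpause. A runnable sketch of that flow with the minikube binary stubbed out as a plain function; the stub's behavior mirrors the log, not minikube's actual implementation:

```python
# State shared by the stub; the real harness shells out to minikube instead.
state = {"paused": False}

def minikube(*args):
    """Stub: returns (exit_code, stdout) like a `minikube` invocation would."""
    if args[0] == "pause":
        state["paused"] = True
        return 0, ""
    if args[0] == "unpause":
        state["paused"] = False
        return 0, ""
    if args[0] == "status":
        if state["paused"]:
            # A paused cluster reports Paused and exits with status 2,
            # which the test logs as "exit status 2 (may be ok)".
            return 2, "Paused"
        return 0, "Running"
    raise ValueError(args)

minikube("pause", "-p", "embed-certs-863169")
code, out = minikube("status", "--format", "{{.APIServer}}")
print(f"{out} (exit status {code}, may be ok)")
minikube("unpause", "-p", "embed-certs-863169")
code, out = minikube("status", "--format", "{{.APIServer}}")
print(f"{out} (exit status {code})")
```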

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (59.21s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-087810 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:45:59.753805   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/calico-705606/client.crt: no such file or directory
E0429 13:46:00.218294   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/custom-flannel-705606/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-087810 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (59.209583588s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nhdfb" [0739f8d9-35da-4833-be7f-8aac1dc0ca4c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004803097s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nhdfb" [0739f8d9-35da-4833-be7f-8aac1dc0ca4c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006375497s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-948050 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-948050 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.86s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-948050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-948050 -n no-preload-948050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-948050 -n no-preload-948050: exit status 2 (274.781474ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-948050 -n no-preload-948050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-948050 -n no-preload-948050: exit status 2 (270.500557ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-948050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-948050 -n no-preload-948050
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-948050 -n no-preload-948050
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7xlfn" [411a7c5e-de01-4f91-989c-6a4d17965b5d] Running
E0429 13:46:56.895782   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/enable-default-cni-705606/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005149514s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-087810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-087810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.385679891s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-087810 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-087810 --alsologtostderr -v=3: (2.338717803s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7xlfn" [411a7c5e-de01-4f91-989c-6a4d17965b5d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004122046s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-827786 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-087810 -n newest-cni-087810
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-087810 -n newest-cni-087810: exit status 7 (76.854835ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-087810 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
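Several steps above log "status error: exit status N (may be ok)": the harness tolerates certain non-zero `minikube status` exit codes instead of failing. A minimal sketch of that classification; the code meanings are assumptions drawn from this log (2 after pause, 7 after stop), not the authoritative minikube exit-code table:

```python
# Exit codes the harness accepts as expected cluster states (assumed set).
ACCEPTABLE_STATUS_CODES = {
    0,  # everything running
    2,  # a component is paused (seen after `minikube pause`)
    7,  # the host is stopped (seen after `minikube stop`)
}

def status_may_be_ok(exit_code: int) -> bool:
    """Mirror the test's 'exit status N (may be ok)' tolerance."""
    return exit_code in ACCEPTABLE_STATUS_CODES

print(status_may_be_ok(7))  # a stopped host is an expected state
print(status_may_be_ok(1))  # a generic failure is not
```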

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-087810 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:47:03.915548   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/functional-955425/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-087810 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (32.807555295s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-087810 -n newest-cni-087810
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-827786 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-827786 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786: exit status 2 (274.16065ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786: exit status 2 (272.58736ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-827786 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-827786 -n default-k8s-diff-port-827786
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-087810 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.6s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-087810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-087810 -n newest-cni-087810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-087810 -n newest-cni-087810: exit status 2 (248.364976ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-087810 -n newest-cni-087810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-087810 -n newest-cni-087810: exit status 2 (247.838163ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-087810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-087810 -n newest-cni-087810
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-087810 -n newest-cni-087810
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lqb4h" [1b0df775-87e1-4181-8319-d8b2e427f3b0] Running
E0429 13:50:17.598580   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/default-k8s-diff-port-827786/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004519355s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lqb4h" [1b0df775-87e1-4181-8319-d8b2e427f3b0] Running
E0429 13:50:27.838880   90027 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-82690/.minikube/profiles/default-k8s-diff-port-827786/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005655403s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-779470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-779470 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-779470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-779470 -n old-k8s-version-779470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-779470 -n old-k8s-version-779470: exit status 2 (246.289524ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-779470 -n old-k8s-version-779470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-779470 -n old-k8s-version-779470: exit status 2 (256.609455ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-779470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-779470 -n old-k8s-version-779470
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-779470 -n old-k8s-version-779470
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                    

Test skip (36/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
116 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 3.24
262 TestNetworkPlugins/group/cilium 3.5
276 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.24s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-705606 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-705606

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-705606

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/hosts:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/resolv.conf:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-705606

>>> host: crictl pods:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: crictl containers:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> k8s: describe netcat deployment:
error: context "kubenet-705606" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-705606" does not exist

>>> k8s: netcat logs:
error: context "kubenet-705606" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-705606" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-705606" does not exist

>>> k8s: coredns logs:
error: context "kubenet-705606" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-705606" does not exist

>>> k8s: api server logs:
error: context "kubenet-705606" does not exist

>>> host: /etc/cni:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: ip a s:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: ip r s:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: iptables-save:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: iptables table nat:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-705606" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-705606" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-705606" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: kubelet daemon config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> k8s: kubelet logs:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-705606

>>> host: docker daemon status:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: docker daemon config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: docker system info:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: cri-docker daemon status:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: cri-docker daemon config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: cri-dockerd version:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: containerd daemon status:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: containerd daemon config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: containerd config dump:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: crio daemon status:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: crio daemon config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: /etc/crio:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

>>> host: crio config:
* Profile "kubenet-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-705606"

----------------------- debugLogs end: kubenet-705606 [took: 3.096460061s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-705606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-705606
--- SKIP: TestNetworkPlugins/group/kubenet (3.24s)

TestNetworkPlugins/group/cilium (3.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-705606 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-705606

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-705606

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-705606

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-705606

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-705606

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-705606

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-705606

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-705606

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-705606

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-705606

>>> host: /etc/nsswitch.conf:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/hosts:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/resolv.conf:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-705606

>>> host: crictl pods:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: crictl containers:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> k8s: describe netcat deployment:
error: context "cilium-705606" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-705606" does not exist

>>> k8s: netcat logs:
error: context "cilium-705606" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-705606" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-705606" does not exist

>>> k8s: coredns logs:
error: context "cilium-705606" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-705606" does not exist

>>> k8s: api server logs:
error: context "cilium-705606" does not exist

>>> host: /etc/cni:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: ip a s:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: ip r s:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: iptables-save:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: iptables table nat:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-705606

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-705606

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-705606" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-705606" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-705606

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-705606

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-705606" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-705606" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-705606" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-705606" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-705606" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: kubelet daemon config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> k8s: kubelet logs:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-705606

>>> host: docker daemon status:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: docker daemon config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: docker system info:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: cri-docker daemon status:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: cri-docker daemon config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: cri-dockerd version:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: containerd daemon status:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: containerd daemon config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: containerd config dump:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: crio daemon status:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: crio daemon config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: /etc/crio:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

>>> host: crio config:
* Profile "cilium-705606" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705606"

----------------------- debugLogs end: cilium-705606 [took: 3.353667509s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-705606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-705606
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-268767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-268767
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
