Test Report: KVM_Linux_containerd 18375

71179286cc00ab66370748dfc329f8d30a1d24a0:2024-03-14:33556

Test fail (1/333)

Order  Failed test                           Duration
40     TestAddons/parallel/InspektorGadget   8.47s
TestAddons/parallel/InspektorGadget (8.47s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-s8zfb" [4f795022-7f42-496f-b09b-db95b01135fd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006330837s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391283
addons_test.go:841: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391283: exit status 11 (443.501381ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-13T23:34:50Z" level=error msg="stat /run/containerd/runc/k8s.io/f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:842: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391283" : exit status 11
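The stderr above suggests a race in the paused-state check: `runc list` enumerates container state directories under `/run/containerd/runc/k8s.io`, then stats each entry, and a container that exits in between leaves a dangling entry that makes the whole listing fail. The following standalone sketch (hypothetical temp paths, not minikube or runc code) reproduces just that stat failure mode:

```shell
#!/bin/sh
# Simulate the race: a "container state dir" exists when it is enumerated,
# but has been removed by the time it is stat'ed.
root=$(mktemp -d)
mkdir "$root/f87212"    # stand-in for a container state directory
rmdir "$root/f87212"    # container exits: its state dir vanishes

if stat "$root/f87212" >/dev/null 2>&1; then
  echo "present"
else
  echo "stat failed: no such file or directory"
fi
rmdir "$root"
```

Since the container had already exited, a retry of `minikube addons disable inspektor-gadget` after the gadget pod is fully gone would likely not hit the same window.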
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-391283 -n addons-391283
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-391283 logs -n 25: (2.105463228s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-092824                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-092824                                                                     | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| start   | -o=json --download-only                                                                     | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | -p download-only-816687                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC | 13 Mar 24 23:28 UTC |
	| delete  | -p download-only-816687                                                                     | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC | 13 Mar 24 23:28 UTC |
	| start   | -o=json --download-only                                                                     | download-only-717922 | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC |                     |
	|         | -p download-only-717922                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| delete  | -p download-only-717922                                                                     | download-only-717922 | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| delete  | -p download-only-092824                                                                     | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| delete  | -p download-only-816687                                                                     | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| delete  | -p download-only-717922                                                                     | download-only-717922 | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC | 13 Mar 24 23:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-719631 | jenkins | v1.32.0 | 13 Mar 24 23:30 UTC |                     |
	|         | binary-mirror-719631                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37545                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-719631                                                                     | binary-mirror-719631 | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC | 13 Mar 24 23:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC |                     |
	|         | addons-391283                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC |                     |
	|         | addons-391283                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-391283 --wait=true                                                                | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:31 UTC | 13 Mar 24 23:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC | 13 Mar 24 23:34 UTC |
	|         | -p addons-391283                                                                            |                      |         |         |                     |                     |
	| addons  | addons-391283 addons                                                                        | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC | 13 Mar 24 23:34 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-391283 ssh cat                                                                       | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC | 13 Mar 24 23:34 UTC |
	|         | /opt/local-path-provisioner/pvc-2211f1af-7d8e-41d4-9423-4028f6871ce2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-391283 addons disable                                                                | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-391283 ip                                                                            | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC | 13 Mar 24 23:34 UTC |
	| addons  | addons-391283 addons disable                                                                | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC | 13 Mar 24 23:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-391283        | jenkins | v1.32.0 | 13 Mar 24 23:34 UTC |                     |
	|         | addons-391283                                                                               |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:31:00
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:31:00.318974   13619 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:31:00.319111   13619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:31:00.319122   13619 out.go:304] Setting ErrFile to fd 2...
	I0313 23:31:00.319128   13619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:31:00.319317   13619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:31:00.319920   13619 out.go:298] Setting JSON to false
	I0313 23:31:00.320719   13619 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":805,"bootTime":1710371856,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:31:00.320776   13619 start.go:139] virtualization: kvm guest
	I0313 23:31:00.323069   13619 out.go:177] * [addons-391283] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:31:00.324643   13619 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:31:00.324638   13619 notify.go:220] Checking for updates...
	I0313 23:31:00.326014   13619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:31:00.327644   13619 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:31:00.329300   13619 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:31:00.330674   13619 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:31:00.332036   13619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:31:00.333502   13619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:31:00.363164   13619 out.go:177] * Using the kvm2 driver based on user configuration
	I0313 23:31:00.364496   13619 start.go:297] selected driver: kvm2
	I0313 23:31:00.364507   13619 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:31:00.364516   13619 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:31:00.365127   13619 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:31:00.365179   13619 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4922/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:31:00.378481   13619 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:31:00.378529   13619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:31:00.378724   13619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:31:00.378749   13619 cni.go:84] Creating CNI manager for ""
	I0313 23:31:00.378755   13619 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:31:00.378760   13619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:31:00.378810   13619 start.go:340] cluster config:
	{Name:addons-391283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-391283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:31:00.378889   13619 iso.go:125] acquiring lock: {Name:mka186e9faf028141003d89f486cb5756102cb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:31:00.380639   13619 out.go:177] * Starting "addons-391283" primary control-plane node in "addons-391283" cluster
	I0313 23:31:00.381913   13619 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0313 23:31:00.381943   13619 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0313 23:31:00.381956   13619 cache.go:56] Caching tarball of preloaded images
	I0313 23:31:00.382050   13619 preload.go:173] Found /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0313 23:31:00.382076   13619 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0313 23:31:00.382460   13619 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/config.json ...
	I0313 23:31:00.382486   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/config.json: {Name:mkfdc7b8aa80fe3c8f724ba6fd9da13fca5b0886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:00.382628   13619 start.go:360] acquireMachinesLock for addons-391283: {Name:mk1228b1c48259cd5c51c31db75d2993212c8321 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0313 23:31:00.382675   13619 start.go:364] duration metric: took 31.752µs to acquireMachinesLock for "addons-391283"
	I0313 23:31:00.382691   13619 start.go:93] Provisioning new machine with config: &{Name:addons-391283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-391283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0313 23:31:00.382762   13619 start.go:125] createHost starting for "" (driver="kvm2")
	I0313 23:31:00.384465   13619 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0313 23:31:00.384584   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:00.384615   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:00.397033   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0313 23:31:00.397407   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:00.397909   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:00.397923   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:00.398214   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:00.398408   13619 main.go:141] libmachine: (addons-391283) Calling .GetMachineName
	I0313 23:31:00.398535   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:00.398710   13619 start.go:159] libmachine.API.Create for "addons-391283" (driver="kvm2")
	I0313 23:31:00.398739   13619 client.go:168] LocalClient.Create starting
	I0313 23:31:00.398773   13619 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem
	I0313 23:31:00.654173   13619 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/cert.pem
	I0313 23:31:01.024857   13619 main.go:141] libmachine: Running pre-create checks...
	I0313 23:31:01.024881   13619 main.go:141] libmachine: (addons-391283) Calling .PreCreateCheck
	I0313 23:31:01.025394   13619 main.go:141] libmachine: (addons-391283) Calling .GetConfigRaw
	I0313 23:31:01.025821   13619 main.go:141] libmachine: Creating machine...
	I0313 23:31:01.025838   13619 main.go:141] libmachine: (addons-391283) Calling .Create
	I0313 23:31:01.025999   13619 main.go:141] libmachine: (addons-391283) Creating KVM machine...
	I0313 23:31:01.027289   13619 main.go:141] libmachine: (addons-391283) DBG | found existing default KVM network
	I0313 23:31:01.027974   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:01.027827   13641 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0313 23:31:01.028003   13619 main.go:141] libmachine: (addons-391283) DBG | created network xml: 
	I0313 23:31:01.028017   13619 main.go:141] libmachine: (addons-391283) DBG | <network>
	I0313 23:31:01.028026   13619 main.go:141] libmachine: (addons-391283) DBG |   <name>mk-addons-391283</name>
	I0313 23:31:01.028034   13619 main.go:141] libmachine: (addons-391283) DBG |   <dns enable='no'/>
	I0313 23:31:01.028042   13619 main.go:141] libmachine: (addons-391283) DBG |   
	I0313 23:31:01.028058   13619 main.go:141] libmachine: (addons-391283) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0313 23:31:01.028079   13619 main.go:141] libmachine: (addons-391283) DBG |     <dhcp>
	I0313 23:31:01.028122   13619 main.go:141] libmachine: (addons-391283) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0313 23:31:01.028148   13619 main.go:141] libmachine: (addons-391283) DBG |     </dhcp>
	I0313 23:31:01.028156   13619 main.go:141] libmachine: (addons-391283) DBG |   </ip>
	I0313 23:31:01.028161   13619 main.go:141] libmachine: (addons-391283) DBG |   
	I0313 23:31:01.028170   13619 main.go:141] libmachine: (addons-391283) DBG | </network>
	I0313 23:31:01.028175   13619 main.go:141] libmachine: (addons-391283) DBG | 
	I0313 23:31:01.033512   13619 main.go:141] libmachine: (addons-391283) DBG | trying to create private KVM network mk-addons-391283 192.168.39.0/24...
	I0313 23:31:01.095828   13619 main.go:141] libmachine: (addons-391283) DBG | private KVM network mk-addons-391283 192.168.39.0/24 created
	I0313 23:31:01.095848   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:01.095786   13641 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:31:01.095865   13619 main.go:141] libmachine: (addons-391283) Setting up store path in /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283 ...
	I0313 23:31:01.095877   13619 main.go:141] libmachine: (addons-391283) Building disk image from file:///home/jenkins/minikube-integration/18375-4922/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:31:01.095942   13619 main.go:141] libmachine: (addons-391283) Downloading /home/jenkins/minikube-integration/18375-4922/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18375-4922/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0313 23:31:01.321881   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:01.321757   13641 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa...
	I0313 23:31:01.469012   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:01.468884   13641 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/addons-391283.rawdisk...
	I0313 23:31:01.469049   13619 main.go:141] libmachine: (addons-391283) DBG | Writing magic tar header
	I0313 23:31:01.469065   13619 main.go:141] libmachine: (addons-391283) DBG | Writing SSH key tar header
	I0313 23:31:01.469080   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:01.468997   13641 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283 ...
	I0313 23:31:01.469095   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283
	I0313 23:31:01.469110   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283 (perms=drwx------)
	I0313 23:31:01.469128   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4922/.minikube/machines
	I0313 23:31:01.469143   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:31:01.469156   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18375-4922
	I0313 23:31:01.469166   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins/minikube-integration/18375-4922/.minikube/machines (perms=drwxr-xr-x)
	I0313 23:31:01.469176   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins/minikube-integration/18375-4922/.minikube (perms=drwxr-xr-x)
	I0313 23:31:01.469185   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins/minikube-integration/18375-4922 (perms=drwxrwxr-x)
	I0313 23:31:01.469195   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0313 23:31:01.469203   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home/jenkins
	I0313 23:31:01.469212   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0313 23:31:01.469229   13619 main.go:141] libmachine: (addons-391283) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0313 23:31:01.469255   13619 main.go:141] libmachine: (addons-391283) DBG | Checking permissions on dir: /home
	I0313 23:31:01.469260   13619 main.go:141] libmachine: (addons-391283) Creating domain...
	I0313 23:31:01.469268   13619 main.go:141] libmachine: (addons-391283) DBG | Skipping /home - not owner
	I0313 23:31:01.470231   13619 main.go:141] libmachine: (addons-391283) define libvirt domain using xml: 
	I0313 23:31:01.470260   13619 main.go:141] libmachine: (addons-391283) <domain type='kvm'>
	I0313 23:31:01.470267   13619 main.go:141] libmachine: (addons-391283)   <name>addons-391283</name>
	I0313 23:31:01.470273   13619 main.go:141] libmachine: (addons-391283)   <memory unit='MiB'>4000</memory>
	I0313 23:31:01.470280   13619 main.go:141] libmachine: (addons-391283)   <vcpu>2</vcpu>
	I0313 23:31:01.470288   13619 main.go:141] libmachine: (addons-391283)   <features>
	I0313 23:31:01.470296   13619 main.go:141] libmachine: (addons-391283)     <acpi/>
	I0313 23:31:01.470306   13619 main.go:141] libmachine: (addons-391283)     <apic/>
	I0313 23:31:01.470319   13619 main.go:141] libmachine: (addons-391283)     <pae/>
	I0313 23:31:01.470329   13619 main.go:141] libmachine: (addons-391283)     
	I0313 23:31:01.470337   13619 main.go:141] libmachine: (addons-391283)   </features>
	I0313 23:31:01.470344   13619 main.go:141] libmachine: (addons-391283)   <cpu mode='host-passthrough'>
	I0313 23:31:01.470349   13619 main.go:141] libmachine: (addons-391283)   
	I0313 23:31:01.470360   13619 main.go:141] libmachine: (addons-391283)   </cpu>
	I0313 23:31:01.470366   13619 main.go:141] libmachine: (addons-391283)   <os>
	I0313 23:31:01.470374   13619 main.go:141] libmachine: (addons-391283)     <type>hvm</type>
	I0313 23:31:01.470405   13619 main.go:141] libmachine: (addons-391283)     <boot dev='cdrom'/>
	I0313 23:31:01.470422   13619 main.go:141] libmachine: (addons-391283)     <boot dev='hd'/>
	I0313 23:31:01.470448   13619 main.go:141] libmachine: (addons-391283)     <bootmenu enable='no'/>
	I0313 23:31:01.470465   13619 main.go:141] libmachine: (addons-391283)   </os>
	I0313 23:31:01.470472   13619 main.go:141] libmachine: (addons-391283)   <devices>
	I0313 23:31:01.470480   13619 main.go:141] libmachine: (addons-391283)     <disk type='file' device='cdrom'>
	I0313 23:31:01.470490   13619 main.go:141] libmachine: (addons-391283)       <source file='/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/boot2docker.iso'/>
	I0313 23:31:01.470498   13619 main.go:141] libmachine: (addons-391283)       <target dev='hdc' bus='scsi'/>
	I0313 23:31:01.470504   13619 main.go:141] libmachine: (addons-391283)       <readonly/>
	I0313 23:31:01.470511   13619 main.go:141] libmachine: (addons-391283)     </disk>
	I0313 23:31:01.470517   13619 main.go:141] libmachine: (addons-391283)     <disk type='file' device='disk'>
	I0313 23:31:01.470526   13619 main.go:141] libmachine: (addons-391283)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0313 23:31:01.470533   13619 main.go:141] libmachine: (addons-391283)       <source file='/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/addons-391283.rawdisk'/>
	I0313 23:31:01.470541   13619 main.go:141] libmachine: (addons-391283)       <target dev='hda' bus='virtio'/>
	I0313 23:31:01.470552   13619 main.go:141] libmachine: (addons-391283)     </disk>
	I0313 23:31:01.470562   13619 main.go:141] libmachine: (addons-391283)     <interface type='network'>
	I0313 23:31:01.470575   13619 main.go:141] libmachine: (addons-391283)       <source network='mk-addons-391283'/>
	I0313 23:31:01.470587   13619 main.go:141] libmachine: (addons-391283)       <model type='virtio'/>
	I0313 23:31:01.470596   13619 main.go:141] libmachine: (addons-391283)     </interface>
	I0313 23:31:01.470606   13619 main.go:141] libmachine: (addons-391283)     <interface type='network'>
	I0313 23:31:01.470615   13619 main.go:141] libmachine: (addons-391283)       <source network='default'/>
	I0313 23:31:01.470627   13619 main.go:141] libmachine: (addons-391283)       <model type='virtio'/>
	I0313 23:31:01.470636   13619 main.go:141] libmachine: (addons-391283)     </interface>
	I0313 23:31:01.470649   13619 main.go:141] libmachine: (addons-391283)     <serial type='pty'>
	I0313 23:31:01.470660   13619 main.go:141] libmachine: (addons-391283)       <target port='0'/>
	I0313 23:31:01.470668   13619 main.go:141] libmachine: (addons-391283)     </serial>
	I0313 23:31:01.470681   13619 main.go:141] libmachine: (addons-391283)     <console type='pty'>
	I0313 23:31:01.470696   13619 main.go:141] libmachine: (addons-391283)       <target type='serial' port='0'/>
	I0313 23:31:01.470708   13619 main.go:141] libmachine: (addons-391283)     </console>
	I0313 23:31:01.470719   13619 main.go:141] libmachine: (addons-391283)     <rng model='virtio'>
	I0313 23:31:01.470732   13619 main.go:141] libmachine: (addons-391283)       <backend model='random'>/dev/random</backend>
	I0313 23:31:01.470741   13619 main.go:141] libmachine: (addons-391283)     </rng>
	I0313 23:31:01.470748   13619 main.go:141] libmachine: (addons-391283)     
	I0313 23:31:01.470758   13619 main.go:141] libmachine: (addons-391283)     
	I0313 23:31:01.470766   13619 main.go:141] libmachine: (addons-391283)   </devices>
	I0313 23:31:01.470777   13619 main.go:141] libmachine: (addons-391283) </domain>
	I0313 23:31:01.470796   13619 main.go:141] libmachine: (addons-391283) 
	I0313 23:31:01.477499   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:20:9e:5e in network default
	I0313 23:31:01.477999   13619 main.go:141] libmachine: (addons-391283) Ensuring networks are active...
	I0313 23:31:01.478014   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:01.478580   13619 main.go:141] libmachine: (addons-391283) Ensuring network default is active
	I0313 23:31:01.478868   13619 main.go:141] libmachine: (addons-391283) Ensuring network mk-addons-391283 is active
	I0313 23:31:01.480023   13619 main.go:141] libmachine: (addons-391283) Getting domain xml...
	I0313 23:31:01.480590   13619 main.go:141] libmachine: (addons-391283) Creating domain...
	I0313 23:31:02.818039   13619 main.go:141] libmachine: (addons-391283) Waiting to get IP...
	I0313 23:31:02.818930   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:02.819418   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:02.819449   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:02.819388   13641 retry.go:31] will retry after 269.983153ms: waiting for machine to come up
	I0313 23:31:03.091048   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:03.092230   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:03.092258   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:03.092182   13641 retry.go:31] will retry after 363.266255ms: waiting for machine to come up
	I0313 23:31:03.456717   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:03.457218   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:03.457248   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:03.457185   13641 retry.go:31] will retry after 408.62654ms: waiting for machine to come up
	I0313 23:31:03.867676   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:03.868157   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:03.868183   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:03.868123   13641 retry.go:31] will retry after 497.440436ms: waiting for machine to come up
	I0313 23:31:04.366731   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:04.367124   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:04.367160   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:04.367091   13641 retry.go:31] will retry after 559.458649ms: waiting for machine to come up
	I0313 23:31:04.927864   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:04.928282   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:04.928312   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:04.928227   13641 retry.go:31] will retry after 577.74382ms: waiting for machine to come up
	I0313 23:31:05.507546   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:05.507942   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:05.507969   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:05.507913   13641 retry.go:31] will retry after 1.001767333s: waiting for machine to come up
	I0313 23:31:06.511133   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:06.511524   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:06.511554   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:06.511486   13641 retry.go:31] will retry after 1.062073622s: waiting for machine to come up
	I0313 23:31:07.575680   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:07.576052   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:07.576081   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:07.576005   13641 retry.go:31] will retry after 1.858866896s: waiting for machine to come up
	I0313 23:31:09.436217   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:09.436582   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:09.436604   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:09.436548   13641 retry.go:31] will retry after 2.041278828s: waiting for machine to come up
	I0313 23:31:11.479305   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:11.479725   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:11.479754   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:11.479678   13641 retry.go:31] will retry after 2.014821285s: waiting for machine to come up
	I0313 23:31:13.496632   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:13.497067   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:13.497098   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:13.497012   13641 retry.go:31] will retry after 2.631445032s: waiting for machine to come up
	I0313 23:31:16.130398   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:16.130847   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:16.130875   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:16.130800   13641 retry.go:31] will retry after 3.05851832s: waiting for machine to come up
	I0313 23:31:19.192853   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:19.193183   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find current IP address of domain addons-391283 in network mk-addons-391283
	I0313 23:31:19.193202   13619 main.go:141] libmachine: (addons-391283) DBG | I0313 23:31:19.193139   13641 retry.go:31] will retry after 3.885624299s: waiting for machine to come up
	I0313 23:31:23.080721   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.081108   13619 main.go:141] libmachine: (addons-391283) Found IP for machine: 192.168.39.216
	I0313 23:31:23.081133   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has current primary IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.081143   13619 main.go:141] libmachine: (addons-391283) Reserving static IP address...
	I0313 23:31:23.081455   13619 main.go:141] libmachine: (addons-391283) DBG | unable to find host DHCP lease matching {name: "addons-391283", mac: "52:54:00:dc:6b:c3", ip: "192.168.39.216"} in network mk-addons-391283
	I0313 23:31:23.150069   13619 main.go:141] libmachine: (addons-391283) Reserved static IP address: 192.168.39.216
	I0313 23:31:23.150102   13619 main.go:141] libmachine: (addons-391283) Waiting for SSH to be available...
	I0313 23:31:23.150113   13619 main.go:141] libmachine: (addons-391283) DBG | Getting to WaitForSSH function...
	I0313 23:31:23.152037   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.152336   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.152369   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.152449   13619 main.go:141] libmachine: (addons-391283) DBG | Using SSH client type: external
	I0313 23:31:23.152491   13619 main.go:141] libmachine: (addons-391283) DBG | Using SSH private key: /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa (-rw-------)
	I0313 23:31:23.152539   13619 main.go:141] libmachine: (addons-391283) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0313 23:31:23.152558   13619 main.go:141] libmachine: (addons-391283) DBG | About to run SSH command:
	I0313 23:31:23.152569   13619 main.go:141] libmachine: (addons-391283) DBG | exit 0
	I0313 23:31:23.282775   13619 main.go:141] libmachine: (addons-391283) DBG | SSH cmd err, output: <nil>: 
	I0313 23:31:23.283051   13619 main.go:141] libmachine: (addons-391283) KVM machine creation complete!
	I0313 23:31:23.283387   13619 main.go:141] libmachine: (addons-391283) Calling .GetConfigRaw
	I0313 23:31:23.283858   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:23.284063   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:23.284247   13619 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0313 23:31:23.284262   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:23.285171   13619 main.go:141] libmachine: Detecting operating system of created instance...
	I0313 23:31:23.285194   13619 main.go:141] libmachine: Waiting for SSH to be available...
	I0313 23:31:23.285202   13619 main.go:141] libmachine: Getting to WaitForSSH function...
	I0313 23:31:23.285210   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.287246   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.287588   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.287615   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.287718   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:23.287889   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.288053   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.288203   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:23.288380   13619 main.go:141] libmachine: Using SSH client type: native
	I0313 23:31:23.288552   13619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0313 23:31:23.288563   13619 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0313 23:31:23.394069   13619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:31:23.394096   13619 main.go:141] libmachine: Detecting the provisioner...
	I0313 23:31:23.394103   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.396432   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.396728   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.396762   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.396873   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:23.397047   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.397202   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.397305   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:23.397454   13619 main.go:141] libmachine: Using SSH client type: native
	I0313 23:31:23.397607   13619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0313 23:31:23.397619   13619 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0313 23:31:23.504280   13619 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0313 23:31:23.504361   13619 main.go:141] libmachine: found compatible host: buildroot
	I0313 23:31:23.504372   13619 main.go:141] libmachine: Provisioning with buildroot...
	I0313 23:31:23.504380   13619 main.go:141] libmachine: (addons-391283) Calling .GetMachineName
	I0313 23:31:23.504611   13619 buildroot.go:166] provisioning hostname "addons-391283"
	I0313 23:31:23.504634   13619 main.go:141] libmachine: (addons-391283) Calling .GetMachineName
	I0313 23:31:23.504850   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.507374   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.507697   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.507727   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.507863   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:23.508038   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.508220   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.508345   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:23.508526   13619 main.go:141] libmachine: Using SSH client type: native
	I0313 23:31:23.508676   13619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0313 23:31:23.508687   13619 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-391283 && echo "addons-391283" | sudo tee /etc/hostname
	I0313 23:31:23.630516   13619 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-391283
	
	I0313 23:31:23.630539   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.633167   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.633440   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.633465   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.633675   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:23.633869   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.634040   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.634178   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:23.634345   13619 main.go:141] libmachine: Using SSH client type: native
	I0313 23:31:23.634527   13619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0313 23:31:23.634543   13619 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-391283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-391283/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-391283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0313 23:31:23.751856   13619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0313 23:31:23.751880   13619 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18375-4922/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-4922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-4922/.minikube}
	I0313 23:31:23.751904   13619 buildroot.go:174] setting up certificates
	I0313 23:31:23.751914   13619 provision.go:84] configureAuth start
	I0313 23:31:23.751922   13619 main.go:141] libmachine: (addons-391283) Calling .GetMachineName
	I0313 23:31:23.752141   13619 main.go:141] libmachine: (addons-391283) Calling .GetIP
	I0313 23:31:23.754164   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.754557   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.754587   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.754720   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.756567   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.756822   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.756841   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.756955   13619 provision.go:143] copyHostCerts
	I0313 23:31:23.757027   13619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-4922/.minikube/ca.pem (1082 bytes)
	I0313 23:31:23.757126   13619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-4922/.minikube/cert.pem (1123 bytes)
	I0313 23:31:23.757196   13619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-4922/.minikube/key.pem (1679 bytes)
	I0313 23:31:23.757273   13619 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-4922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca-key.pem org=jenkins.addons-391283 san=[127.0.0.1 192.168.39.216 addons-391283 localhost minikube]
	I0313 23:31:23.901399   13619 provision.go:177] copyRemoteCerts
	I0313 23:31:23.901454   13619 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0313 23:31:23.901479   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:23.903651   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.903964   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:23.903993   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:23.904117   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:23.904293   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:23.904460   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:23.904601   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:23.985096   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0313 23:31:24.010672   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0313 23:31:24.036100   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0313 23:31:24.061470   13619 provision.go:87] duration metric: took 309.546756ms to configureAuth
	I0313 23:31:24.061492   13619 buildroot.go:189] setting minikube options for container-runtime
	I0313 23:31:24.061659   13619 config.go:182] Loaded profile config "addons-391283": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:31:24.061682   13619 main.go:141] libmachine: Checking connection to Docker...
	I0313 23:31:24.061694   13619 main.go:141] libmachine: (addons-391283) Calling .GetURL
	I0313 23:31:24.062764   13619 main.go:141] libmachine: (addons-391283) DBG | Using libvirt version 6000000
	I0313 23:31:24.064950   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.065263   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.065283   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.065437   13619 main.go:141] libmachine: Docker is up and running!
	I0313 23:31:24.065454   13619 main.go:141] libmachine: Reticulating splines...
	I0313 23:31:24.065463   13619 client.go:171] duration metric: took 23.666716385s to LocalClient.Create
	I0313 23:31:24.065485   13619 start.go:167] duration metric: took 23.66677414s to libmachine.API.Create "addons-391283"
	I0313 23:31:24.065499   13619 start.go:293] postStartSetup for "addons-391283" (driver="kvm2")
	I0313 23:31:24.065510   13619 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0313 23:31:24.065527   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:24.065722   13619 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0313 23:31:24.065742   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:24.067872   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.068166   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.068201   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.068355   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:24.068507   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:24.068647   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:24.068794   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:24.149197   13619 ssh_runner.go:195] Run: cat /etc/os-release
	I0313 23:31:24.153773   13619 info.go:137] Remote host: Buildroot 2023.02.9
	I0313 23:31:24.153791   13619 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4922/.minikube/addons for local assets ...
	I0313 23:31:24.153849   13619 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-4922/.minikube/files for local assets ...
	I0313 23:31:24.153871   13619 start.go:296] duration metric: took 88.36642ms for postStartSetup
	I0313 23:31:24.153897   13619 main.go:141] libmachine: (addons-391283) Calling .GetConfigRaw
	I0313 23:31:24.154368   13619 main.go:141] libmachine: (addons-391283) Calling .GetIP
	I0313 23:31:24.156688   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.157000   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.157050   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.157257   13619 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/config.json ...
	I0313 23:31:24.157406   13619 start.go:128] duration metric: took 23.774633428s to createHost
	I0313 23:31:24.157432   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:24.159452   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.159742   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.159767   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.159907   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:24.160053   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:24.160205   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:24.160322   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:24.160484   13619 main.go:141] libmachine: Using SSH client type: native
	I0313 23:31:24.160646   13619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0313 23:31:24.160657   13619 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0313 23:31:24.267537   13619 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710372684.234984380
	
	I0313 23:31:24.267557   13619 fix.go:216] guest clock: 1710372684.234984380
	I0313 23:31:24.267568   13619 fix.go:229] Guest: 2024-03-13 23:31:24.23498438 +0000 UTC Remote: 2024-03-13 23:31:24.157419422 +0000 UTC m=+23.883244866 (delta=77.564958ms)
	I0313 23:31:24.267618   13619 fix.go:200] guest clock delta is within tolerance: 77.564958ms
	I0313 23:31:24.267627   13619 start.go:83] releasing machines lock for "addons-391283", held for 23.884942505s
	I0313 23:31:24.267654   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:24.267871   13619 main.go:141] libmachine: (addons-391283) Calling .GetIP
	I0313 23:31:24.270048   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.270391   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.270415   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.270511   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:24.270963   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:24.271107   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:24.271186   13619 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0313 23:31:24.271240   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:24.271260   13619 ssh_runner.go:195] Run: cat /version.json
	I0313 23:31:24.271277   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:24.273536   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.273622   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.273857   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.273894   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.273986   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:24.274011   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:24.274024   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:24.274197   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:24.274206   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:24.274374   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:24.274376   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:24.274528   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:24.274545   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:24.274670   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:24.437976   13619 ssh_runner.go:195] Run: systemctl --version
	I0313 23:31:24.444748   13619 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0313 23:31:24.451442   13619 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0313 23:31:24.451500   13619 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0313 23:31:24.473154   13619 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0313 23:31:24.473169   13619 start.go:494] detecting cgroup driver to use...
	I0313 23:31:24.473219   13619 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0313 23:31:24.514616   13619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0313 23:31:24.527993   13619 docker.go:217] disabling cri-docker service (if available) ...
	I0313 23:31:24.528039   13619 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0313 23:31:24.541112   13619 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0313 23:31:24.554486   13619 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0313 23:31:24.666171   13619 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0313 23:31:24.798128   13619 docker.go:233] disabling docker service ...
	I0313 23:31:24.798200   13619 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0313 23:31:24.813493   13619 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0313 23:31:24.826053   13619 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0313 23:31:24.958754   13619 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0313 23:31:25.082213   13619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0313 23:31:25.097104   13619 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0313 23:31:25.116560   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0313 23:31:25.127685   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0313 23:31:25.138416   13619 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0313 23:31:25.138460   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0313 23:31:25.149526   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0313 23:31:25.160428   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0313 23:31:25.171391   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0313 23:31:25.182132   13619 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0313 23:31:25.193235   13619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0313 23:31:25.203988   13619 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0313 23:31:25.213795   13619 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0313 23:31:25.213845   13619 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0313 23:31:25.228036   13619 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0313 23:31:25.237486   13619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:31:25.348393   13619 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0313 23:31:25.378086   13619 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0313 23:31:25.378151   13619 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0313 23:31:25.382968   13619 retry.go:31] will retry after 632.154444ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0313 23:31:26.016103   13619 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0313 23:31:26.021777   13619 start.go:562] Will wait 60s for crictl version
	I0313 23:31:26.021838   13619 ssh_runner.go:195] Run: which crictl
	I0313 23:31:26.026366   13619 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0313 23:31:26.065523   13619 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0313 23:31:26.065605   13619 ssh_runner.go:195] Run: containerd --version
	I0313 23:31:26.094682   13619 ssh_runner.go:195] Run: containerd --version
	I0313 23:31:26.122251   13619 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.14 ...
	I0313 23:31:26.123623   13619 main.go:141] libmachine: (addons-391283) Calling .GetIP
	I0313 23:31:26.126189   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:26.126531   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:26.126551   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:26.126735   13619 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0313 23:31:26.131156   13619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:31:26.144757   13619 kubeadm.go:877] updating cluster {Name:addons-391283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-391283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0313 23:31:26.144845   13619 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0313 23:31:26.144909   13619 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:31:26.186373   13619 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0313 23:31:26.186429   13619 ssh_runner.go:195] Run: which lz4
	I0313 23:31:26.190781   13619 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0313 23:31:26.195393   13619 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0313 23:31:26.195412   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0313 23:31:27.898577   13619 containerd.go:548] duration metric: took 1.707818422s to copy over tarball
	I0313 23:31:27.898634   13619 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0313 23:31:30.647708   13619 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.749051491s)
	I0313 23:31:30.647734   13619 containerd.go:555] duration metric: took 2.749138127s to extract the tarball
	I0313 23:31:30.647740   13619 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0313 23:31:30.689888   13619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:31:30.805569   13619 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0313 23:31:30.831496   13619 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:31:30.875926   13619 retry.go:31] will retry after 173.843351ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-13T23:31:30Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0313 23:31:31.050361   13619 ssh_runner.go:195] Run: sudo crictl images --output json
	I0313 23:31:31.088166   13619 containerd.go:612] all images are preloaded for containerd runtime.
	I0313 23:31:31.088189   13619 cache_images.go:84] Images are preloaded, skipping loading
	I0313 23:31:31.088198   13619 kubeadm.go:928] updating node { 192.168.39.216 8443 v1.28.4 containerd true true} ...
	I0313 23:31:31.088318   13619 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-391283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-391283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0313 23:31:31.088399   13619 ssh_runner.go:195] Run: sudo crictl info
	I0313 23:31:31.123887   13619 cni.go:84] Creating CNI manager for ""
	I0313 23:31:31.123906   13619 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:31:31.123917   13619 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0313 23:31:31.123934   13619 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-391283 NodeName:addons-391283 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0313 23:31:31.124057   13619 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-391283"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0313 23:31:31.124112   13619 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0313 23:31:31.137966   13619 binaries.go:44] Found k8s binaries, skipping transfer
	I0313 23:31:31.138061   13619 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0313 23:31:31.149145   13619 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0313 23:31:31.167584   13619 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0313 23:31:31.185907   13619 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0313 23:31:31.204292   13619 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I0313 23:31:31.208665   13619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0313 23:31:31.222998   13619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:31:31.344078   13619 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:31:31.366296   13619 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283 for IP: 192.168.39.216
	I0313 23:31:31.366322   13619 certs.go:194] generating shared ca certs ...
	I0313 23:31:31.366347   13619 certs.go:226] acquiring lock for ca certs: {Name:mkaf260582cedb19f0c1c73c21ae5782449641ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.366493   13619 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-4922/.minikube/ca.key
	I0313 23:31:31.518706   13619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4922/.minikube/ca.crt ...
	I0313 23:31:31.518738   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/ca.crt: {Name:mk4f1427c5744667635e30a527a8eedc356f8a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.518884   13619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4922/.minikube/ca.key ...
	I0313 23:31:31.518895   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/ca.key: {Name:mk8f94c59dab0ab1080c77ab0cde7787ed14f2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.518962   13619 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.key
	I0313 23:31:31.725556   13619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.crt ...
	I0313 23:31:31.725582   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.crt: {Name:mk86fe2cacc62d83fe9457d5b9940c5d9a3daf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.725743   13619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.key ...
	I0313 23:31:31.725756   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.key: {Name:mk86780525a37730d4909fc882cc2d6a44f436d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.725841   13619 certs.go:256] generating profile certs ...
	I0313 23:31:31.725906   13619 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.key
	I0313 23:31:31.725928   13619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt with IP's: []
	I0313 23:31:31.883811   13619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt ...
	I0313 23:31:31.883834   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: {Name:mk2d49c8202deef59fc4bd07f6c31c874f64a5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.883986   13619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.key ...
	I0313 23:31:31.883998   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.key: {Name:mkf73da9a52c503168f7d66162eb493ef60da6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:31.884066   13619 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key.e9651e92
	I0313 23:31:31.884083   13619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt.e9651e92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I0313 23:31:32.163341   13619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt.e9651e92 ...
	I0313 23:31:32.163369   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt.e9651e92: {Name:mkf9b0f5c59df2014d137fce291af5c7c07ccf10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:32.163509   13619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key.e9651e92 ...
	I0313 23:31:32.163521   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key.e9651e92: {Name:mk205ae67c9cd51599a07c07b9cefe63fd2b9196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:32.163589   13619 certs.go:381] copying /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt.e9651e92 -> /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt
	I0313 23:31:32.163675   13619 certs.go:385] copying /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key.e9651e92 -> /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key
	I0313 23:31:32.163718   13619 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.key
	I0313 23:31:32.163734   13619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.crt with IP's: []
	I0313 23:31:32.359395   13619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.crt ...
	I0313 23:31:32.359421   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.crt: {Name:mk94a5ef6dfd8427de9c4dd07df53d9ec8c3e676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:32.359569   13619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.key ...
	I0313 23:31:32.359580   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.key: {Name:mk7ecdc766e75f28c33118ed8d53159d22bee7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:32.359739   13619 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca-key.pem (1675 bytes)
	I0313 23:31:32.359770   13619 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/ca.pem (1082 bytes)
	I0313 23:31:32.359793   13619 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/cert.pem (1123 bytes)
	I0313 23:31:32.359815   13619 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-4922/.minikube/certs/key.pem (1679 bytes)
	I0313 23:31:32.360360   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0313 23:31:32.387111   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0313 23:31:32.412755   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0313 23:31:32.438139   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0313 23:31:32.463934   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0313 23:31:32.489418   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0313 23:31:32.514587   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0313 23:31:32.539392   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0313 23:31:32.564154   13619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-4922/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0313 23:31:32.588333   13619 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0313 23:31:32.605712   13619 ssh_runner.go:195] Run: openssl version
	I0313 23:31:32.611658   13619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0313 23:31:32.623812   13619 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:31:32.628503   13619 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 13 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:31:32.628545   13619 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0313 23:31:32.634403   13619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0313 23:31:32.646960   13619 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0313 23:31:32.651433   13619 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0313 23:31:32.651475   13619 kubeadm.go:391] StartCluster: {Name:addons-391283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-391283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:31:32.651538   13619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0313 23:31:32.651575   13619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0313 23:31:32.694773   13619 cri.go:89] found id: ""
	I0313 23:31:32.694835   13619 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0313 23:31:32.706650   13619 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0313 23:31:32.717548   13619 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0313 23:31:32.728354   13619 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0313 23:31:32.728369   13619 kubeadm.go:156] found existing configuration files:
	
	I0313 23:31:32.728401   13619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0313 23:31:32.740872   13619 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0313 23:31:32.740930   13619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0313 23:31:32.752641   13619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0313 23:31:32.764760   13619 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0313 23:31:32.764822   13619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0313 23:31:32.777783   13619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0313 23:31:32.795079   13619 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0313 23:31:32.795146   13619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0313 23:31:32.808482   13619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0313 23:31:32.827517   13619 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0313 23:31:32.827600   13619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0313 23:31:32.838713   13619 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0313 23:31:33.028139   13619 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0313 23:31:43.118410   13619 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0313 23:31:43.118503   13619 kubeadm.go:309] [preflight] Running pre-flight checks
	I0313 23:31:43.118602   13619 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0313 23:31:43.118724   13619 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0313 23:31:43.118867   13619 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0313 23:31:43.118962   13619 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0313 23:31:43.120503   13619 out.go:204]   - Generating certificates and keys ...
	I0313 23:31:43.120567   13619 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0313 23:31:43.120616   13619 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0313 23:31:43.120711   13619 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0313 23:31:43.120804   13619 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0313 23:31:43.120883   13619 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0313 23:31:43.120954   13619 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0313 23:31:43.121032   13619 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0313 23:31:43.121207   13619 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-391283 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0313 23:31:43.121293   13619 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0313 23:31:43.121469   13619 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-391283 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0313 23:31:43.121561   13619 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0313 23:31:43.121643   13619 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0313 23:31:43.121698   13619 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0313 23:31:43.121787   13619 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0313 23:31:43.121872   13619 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0313 23:31:43.121938   13619 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0313 23:31:43.122039   13619 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0313 23:31:43.122133   13619 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0313 23:31:43.122242   13619 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0313 23:31:43.122324   13619 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0313 23:31:43.123761   13619 out.go:204]   - Booting up control plane ...
	I0313 23:31:43.123845   13619 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0313 23:31:43.123922   13619 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0313 23:31:43.124016   13619 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0313 23:31:43.124157   13619 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0313 23:31:43.124268   13619 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0313 23:31:43.124323   13619 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0313 23:31:43.124519   13619 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0313 23:31:43.124613   13619 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.006031 seconds
	I0313 23:31:43.124742   13619 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0313 23:31:43.124887   13619 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0313 23:31:43.124977   13619 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0313 23:31:43.125212   13619 kubeadm.go:309] [mark-control-plane] Marking the node addons-391283 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0313 23:31:43.125283   13619 kubeadm.go:309] [bootstrap-token] Using token: 0ab358.4sniocqn46g4w4je
	I0313 23:31:43.126829   13619 out.go:204]   - Configuring RBAC rules ...
	I0313 23:31:43.126940   13619 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0313 23:31:43.127052   13619 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0313 23:31:43.127236   13619 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0313 23:31:43.127397   13619 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0313 23:31:43.127537   13619 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0313 23:31:43.127642   13619 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0313 23:31:43.127776   13619 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0313 23:31:43.127838   13619 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0313 23:31:43.127901   13619 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0313 23:31:43.127910   13619 kubeadm.go:309] 
	I0313 23:31:43.127978   13619 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0313 23:31:43.127988   13619 kubeadm.go:309] 
	I0313 23:31:43.128080   13619 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0313 23:31:43.128091   13619 kubeadm.go:309] 
	I0313 23:31:43.128134   13619 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0313 23:31:43.128214   13619 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0313 23:31:43.128286   13619 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0313 23:31:43.128296   13619 kubeadm.go:309] 
	I0313 23:31:43.128373   13619 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0313 23:31:43.128389   13619 kubeadm.go:309] 
	I0313 23:31:43.128473   13619 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0313 23:31:43.128491   13619 kubeadm.go:309] 
	I0313 23:31:43.128561   13619 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0313 23:31:43.128657   13619 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0313 23:31:43.128756   13619 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0313 23:31:43.128765   13619 kubeadm.go:309] 
	I0313 23:31:43.128881   13619 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0313 23:31:43.128994   13619 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0313 23:31:43.129005   13619 kubeadm.go:309] 
	I0313 23:31:43.129128   13619 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0ab358.4sniocqn46g4w4je \
	I0313 23:31:43.129286   13619 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dcfb013506bb976cf0011fad52077f1655a7f497433f877bfd552b406ae615a2 \
	I0313 23:31:43.129320   13619 kubeadm.go:309] 	--control-plane 
	I0313 23:31:43.129330   13619 kubeadm.go:309] 
	I0313 23:31:43.129431   13619 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0313 23:31:43.129443   13619 kubeadm.go:309] 
	I0313 23:31:43.129547   13619 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0ab358.4sniocqn46g4w4je \
	I0313 23:31:43.129692   13619 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dcfb013506bb976cf0011fad52077f1655a7f497433f877bfd552b406ae615a2 
	I0313 23:31:43.129706   13619 cni.go:84] Creating CNI manager for ""
	I0313 23:31:43.129716   13619 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:31:43.131982   13619 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0313 23:31:43.133616   13619 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0313 23:31:43.155604   13619 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0313 23:31:43.212812   13619 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0313 23:31:43.212918   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:43.212966   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-391283 minikube.k8s.io/updated_at=2024_03_13T23_31_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=addons-391283 minikube.k8s.io/primary=true
	I0313 23:31:43.387979   13619 ops.go:34] apiserver oom_adj: -16
	I0313 23:31:43.388141   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:43.888487   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:44.388892   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:44.889167   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:45.388368   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:45.888730   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:46.388900   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:46.888262   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:47.388301   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:47.888777   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:48.388407   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:48.889153   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:49.388730   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:49.888848   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:50.388544   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:50.888552   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:51.388659   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:51.888150   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:52.389076   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:52.889130   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:53.389152   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:53.888242   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:54.388193   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:54.889068   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:55.389004   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:55.888354   13619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0313 23:31:56.001864   13619 kubeadm.go:1106] duration metric: took 12.789012138s to wait for elevateKubeSystemPrivileges
	W0313 23:31:56.001904   13619 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0313 23:31:56.001914   13619 kubeadm.go:393] duration metric: took 23.350441497s to StartCluster
	I0313 23:31:56.001935   13619 settings.go:142] acquiring lock: {Name:mk42e7858dcb0bfd3fb8e811d2bde7ae9c665cc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:56.002063   13619 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:31:56.002393   13619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/kubeconfig: {Name:mk0167b2d32e012766f96da63dbd7a49eb849b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:31:56.002622   13619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0313 23:31:56.002627   13619 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0313 23:31:56.004482   13619 out.go:177] * Verifying Kubernetes components...
	I0313 23:31:56.002706   13619 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0313 23:31:56.002837   13619 config.go:182] Loaded profile config "addons-391283": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:31:56.006372   13619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0313 23:31:56.006386   13619 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-391283"
	I0313 23:31:56.006391   13619 addons.go:69] Setting helm-tiller=true in profile "addons-391283"
	I0313 23:31:56.006404   13619 addons.go:69] Setting default-storageclass=true in profile "addons-391283"
	I0313 23:31:56.006416   13619 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-391283"
	I0313 23:31:56.006422   13619 addons.go:69] Setting ingress-dns=true in profile "addons-391283"
	I0313 23:31:56.006372   13619 addons.go:69] Setting yakd=true in profile "addons-391283"
	I0313 23:31:56.006427   13619 addons.go:69] Setting ingress=true in profile "addons-391283"
	I0313 23:31:56.006431   13619 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-391283"
	I0313 23:31:56.006424   13619 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-391283"
	I0313 23:31:56.006442   13619 addons.go:69] Setting storage-provisioner=true in profile "addons-391283"
	I0313 23:31:56.006451   13619 addons.go:234] Setting addon ingress=true in "addons-391283"
	I0313 23:31:56.006452   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006461   13619 addons.go:234] Setting addon storage-provisioner=true in "addons-391283"
	I0313 23:31:56.006462   13619 addons.go:234] Setting addon yakd=true in "addons-391283"
	I0313 23:31:56.006479   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006493   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006498   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006499   13619 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-391283"
	I0313 23:31:56.006530   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006417   13619 addons.go:234] Setting addon helm-tiller=true in "addons-391283"
	I0313 23:31:56.006621   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006690   13619 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-391283"
	I0313 23:31:56.006688   13619 addons.go:69] Setting volumesnapshots=true in profile "addons-391283"
	I0313 23:31:56.006723   13619 addons.go:234] Setting addon volumesnapshots=true in "addons-391283"
	I0313 23:31:56.006723   13619 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-391283"
	I0313 23:31:56.006764   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006869   13619 addons.go:69] Setting registry=true in profile "addons-391283"
	I0313 23:31:56.006874   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.006890   13619 addons.go:234] Setting addon registry=true in "addons-391283"
	I0313 23:31:56.006889   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.006895   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.006910   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.006926   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.006955   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.006984   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007100   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.007129   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007150   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.007173   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007253   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.007271   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007475   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.007494   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007526   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.006386   13619 addons.go:69] Setting inspektor-gadget=true in profile "addons-391283"
	I0313 23:31:56.007542   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007565   13619 addons.go:234] Setting addon inspektor-gadget=true in "addons-391283"
	I0313 23:31:56.006438   13619 addons.go:234] Setting addon ingress-dns=true in "addons-391283"
	I0313 23:31:56.006372   13619 addons.go:69] Setting cloud-spanner=true in profile "addons-391283"
	I0313 23:31:56.007594   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.007603   13619 addons.go:234] Setting addon cloud-spanner=true in "addons-391283"
	I0313 23:31:56.006379   13619 addons.go:69] Setting metrics-server=true in profile "addons-391283"
	I0313 23:31:56.007613   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.007626   13619 addons.go:234] Setting addon metrics-server=true in "addons-391283"
	I0313 23:31:56.006387   13619 addons.go:69] Setting gcp-auth=true in profile "addons-391283"
	I0313 23:31:56.007644   13619 mustload.go:65] Loading cluster: addons-391283
	I0313 23:31:56.007872   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.007947   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.008214   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.008229   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.008261   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.008274   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.008280   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.008411   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.008448   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.008507   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.008867   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.008891   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.008894   13619 config.go:182] Loaded profile config "addons-391283": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:31:56.009276   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.009319   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.027507   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0313 23:31:56.028045   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.028655   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.028679   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.029033   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.029605   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.029654   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.033544   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0313 23:31:56.035521   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.035562   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.042463   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0313 23:31:56.042804   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.042998   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.043408   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.043425   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.043730   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.043746   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.044120   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.044658   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.044694   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.044885   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0313 23:31:56.045008   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I0313 23:31:56.045272   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.045356   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.045613   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.045755   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.045771   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.045797   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0313 23:31:56.046099   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.046288   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.046313   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.046343   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.046427   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.046446   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.046645   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.046697   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.046813   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.046826   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.047174   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.047667   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.047702   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.047891   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.048428   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.048465   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.048626   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I0313 23:31:56.049045   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.049558   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.049579   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.050048   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.050666   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.050762   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.056652   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0313 23:31:56.057225   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.057828   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.057843   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.058213   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.058406   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.061285   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0313 23:31:56.062330   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.062801   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.062818   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.062887   13619 addons.go:234] Setting addon default-storageclass=true in "addons-391283"
	I0313 23:31:56.062933   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.063315   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.063327   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.063351   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.063895   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.063927   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.064988   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0313 23:31:56.065535   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.066083   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.066101   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.066472   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.066995   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.067025   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.070059   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0313 23:31:56.070445   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.070913   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.070944   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.071986   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.072177   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.073004   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0313 23:31:56.073317   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.073814   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.073830   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.074644   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.076675   13619 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0313 23:31:56.078171   13619 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0313 23:31:56.078189   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0313 23:31:56.078208   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.079382   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.079632   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.081624   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.081949   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.082143   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.082161   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.082402   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.082446   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.082665   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.082898   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.083123   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.083316   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.089207   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0313 23:31:56.089278   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I0313 23:31:56.089833   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.089929   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.090423   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.090441   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.090858   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.091255   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.091556   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0313 23:31:56.092174   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.092189   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.092512   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.092593   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.092912   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.092926   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.093018   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0313 23:31:56.093549   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.093616   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.094530   13619 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-391283"
	I0313 23:31:56.094567   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:31:56.094866   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.094905   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.095373   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.095451   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.095581   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.095593   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.097320   13619 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0313 23:31:56.095962   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.096547   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.098801   13619 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0313 23:31:56.098813   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0313 23:31:56.098830   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.099505   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.101094   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0313 23:31:56.101291   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0313 23:31:56.101747   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.104126   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0313 23:31:56.104199   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0313 23:31:56.102355   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.102714   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.104051   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.105554   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.105578   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.102296   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.105094   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.105380   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0313 23:31:56.105638   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0313 23:31:56.105653   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.105979   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.107138   13619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:31:56.109814   13619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0313 23:31:56.106221   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.109782   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.106319   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.106363   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.106602   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.108843   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0313 23:31:56.108989   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0313 23:31:56.109166   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.109933   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0313 23:31:56.113025   13619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:31:56.111404   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.111444   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.111458   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.111583   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.111616   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.111846   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.111895   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.112049   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.112112   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.112528   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I0313 23:31:56.114996   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.115088   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.115090   13619 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0313 23:31:56.115103   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0313 23:31:56.115118   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.115715   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.115760   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.115805   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.115805   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.115817   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.115869   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.115975   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.115990   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.116459   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.116479   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.116494   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.116536   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.116556   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.116745   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.116799   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.117139   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.117169   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.117206   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.117434   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
	I0313 23:31:56.117450   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.117792   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.117857   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.118310   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.118373   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.119004   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.119020   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.119127   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.121257   13619 out.go:177]   - Using image docker.io/registry:2.8.3
	I0313 23:31:56.120061   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.120102   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.120214   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.120827   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.120983   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.121009   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.122009   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.122190   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0313 23:31:56.124278   13619 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0313 23:31:56.122735   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.122801   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.123078   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.123117   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.123994   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.124791   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I0313 23:31:56.125457   13619 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0313 23:31:56.125471   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.125662   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.125858   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.126558   13619 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0313 23:31:56.126566   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0313 23:31:56.127539   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.128365   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.128373   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0313 23:31:56.129710   13619 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0313 23:31:56.129725   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0313 23:31:56.129737   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.128367   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.128329   13619 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0313 23:31:56.128391   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.128270   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0313 23:31:56.128548   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.128904   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.128936   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.128958   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.129148   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.131433   13619 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0313 23:31:56.131555   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0313 23:31:56.131572   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.133590   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0313 23:31:56.131707   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.132119   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.132740   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.134762   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.134873   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.135126   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.135177   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.135237   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.136814   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0313 23:31:56.135639   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.135672   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.135845   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.136181   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.136975   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.136996   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.137097   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.138059   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.137511   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.139378   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0313 23:31:56.138097   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.138145   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.138341   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.138359   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.138377   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.138614   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:31:56.139692   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.140668   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.142270   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0313 23:31:56.140730   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.140752   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:31:56.140942   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.140961   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.140977   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.142878   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0313 23:31:56.145071   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0313 23:31:56.142909   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.143980   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.144217   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.144253   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.144380   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.148273   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0313 23:31:56.147272   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.151096   13619 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0313 23:31:56.149760   13619 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0313 23:31:56.149774   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.149897   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0313 23:31:56.152310   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0313 23:31:56.152322   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0313 23:31:56.152337   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.153557   13619 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:31:56.153567   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0313 23:31:56.152588   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.153589   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.152716   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.153787   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.154237   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.154253   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.155118   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.155457   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.157614   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.157740   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.157802   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0313 23:31:56.159345   13619 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0313 23:31:56.158139   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.158303   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.158383   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.158445   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.159126   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.159142   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.162521   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0313 23:31:56.163753   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.163773   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.163788   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.163822   13619 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0313 23:31:56.163836   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0313 23:31:56.163852   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.164730   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.164733   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.166404   13619 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0313 23:31:56.164804   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.167820   13619 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0313 23:31:56.164844   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.164988   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.165346   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.166330   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0313 23:31:56.166568   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.166978   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.167842   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0313 23:31:56.167859   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.167870   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.167860   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.167912   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.167570   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.167928   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.168067   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.168075   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.168243   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.168313   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.168451   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.168562   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.168640   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.168761   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.168803   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.169109   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:31:56.169624   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:31:56.169646   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:31:56.170661   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.172258   13619 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0313 23:31:56.170910   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:31:56.171326   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.171518   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.171984   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.174756   13619 out.go:177]   - Using image docker.io/busybox:stable
	I0313 23:31:56.173473   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.173531   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.173612   13619 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0313 23:31:56.173701   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:31:56.176314   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.176359   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0313 23:31:56.176377   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.176422   13619 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0313 23:31:56.176432   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0313 23:31:56.176446   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.176502   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.176900   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.179163   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:31:56.181153   13619 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0313 23:31:56.179781   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.180202   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.180426   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.180778   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.182538   13619 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0313 23:31:56.182545   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0313 23:31:56.182555   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:31:56.182595   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.182608   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.182622   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.182632   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.182629   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.182739   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.182780   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.182880   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.183094   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.183245   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:31:56.185133   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.185461   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:31:56.185474   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:31:56.185595   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:31:56.185721   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:31:56.185842   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:31:56.185934   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	W0313 23:31:56.192163   13619 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40850->192.168.39.216:22: read: connection reset by peer
	I0313 23:31:56.192189   13619 retry.go:31] will retry after 359.799295ms: ssh: handshake failed: read tcp 192.168.39.1:40850->192.168.39.216:22: read: connection reset by peer
	I0313 23:31:56.712695   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0313 23:31:56.788514   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0313 23:31:56.788542   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0313 23:31:56.845685   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0313 23:31:56.873635   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0313 23:31:56.891455   13619 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0313 23:31:56.891476   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0313 23:31:56.893246   13619 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0313 23:31:56.893332   13619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0313 23:31:57.028650   13619 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0313 23:31:57.028670   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0313 23:31:57.040166   13619 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0313 23:31:57.040190   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0313 23:31:57.052828   13619 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0313 23:31:57.052844   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0313 23:31:57.128763   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0313 23:31:57.156177   13619 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0313 23:31:57.156195   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0313 23:31:57.190933   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0313 23:31:57.190964   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0313 23:31:57.201088   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0313 23:31:57.267024   13619 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0313 23:31:57.267055   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0313 23:31:57.421933   13619 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0313 23:31:57.421955   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0313 23:31:57.425619   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0313 23:31:57.473753   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0313 23:31:57.508496   13619 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0313 23:31:57.508515   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0313 23:31:57.518532   13619 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0313 23:31:57.518558   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0313 23:31:57.534767   13619 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0313 23:31:57.534789   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0313 23:31:57.563025   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0313 23:31:57.563057   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0313 23:31:57.589381   13619 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0313 23:31:57.589405   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0313 23:31:57.599456   13619 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0313 23:31:57.599479   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0313 23:31:57.765041   13619 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0313 23:31:57.765067   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0313 23:31:57.794897   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0313 23:31:57.794924   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0313 23:31:57.804935   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0313 23:31:57.818773   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0313 23:31:57.827465   13619 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0313 23:31:57.827486   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0313 23:31:57.828482   13619 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0313 23:31:57.828500   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0313 23:31:57.835719   13619 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0313 23:31:57.835736   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0313 23:31:58.004029   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0313 23:31:58.050227   13619 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0313 23:31:58.050249   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0313 23:31:58.051537   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0313 23:31:58.051559   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0313 23:31:58.061071   13619 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0313 23:31:58.061088   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0313 23:31:58.082718   13619 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0313 23:31:58.082740   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0313 23:31:58.296880   13619 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0313 23:31:58.296903   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0313 23:31:58.308283   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0313 23:31:58.320466   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0313 23:31:58.320486   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0313 23:31:58.398000   13619 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:31:58.398026   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0313 23:31:58.600945   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:31:58.613819   13619 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0313 23:31:58.613842   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0313 23:31:58.626047   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0313 23:31:58.626069   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0313 23:31:58.739122   13619 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0313 23:31:58.739142   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0313 23:31:58.791981   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0313 23:31:58.791998   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0313 23:31:58.857801   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0313 23:31:58.878270   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0313 23:31:58.878296   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0313 23:31:58.957659   13619 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0313 23:31:58.957687   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0313 23:31:59.013584   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0313 23:32:00.893736   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.180998918s)
	I0313 23:32:00.893786   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:00.893795   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:00.894109   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:00.894157   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:00.894173   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:00.894183   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:00.894415   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:00.894433   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:00.894461   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:02.835842   13619 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0313 23:32:02.835879   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:32:02.839235   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:32:02.839694   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:32:02.839716   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:32:02.839937   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:32:02.840119   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:32:02.840321   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:32:02.840471   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:32:03.142995   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.29727974s)
	I0313 23:32:03.143050   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:03.143065   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:03.143327   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:03.143347   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:03.143360   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:03.143368   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:03.143565   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:03.143599   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:03.143617   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:03.258912   13619 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0313 23:32:03.313275   13619 addons.go:234] Setting addon gcp-auth=true in "addons-391283"
	I0313 23:32:03.313328   13619 host.go:66] Checking if "addons-391283" exists ...
	I0313 23:32:03.313637   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:32:03.313664   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:32:03.350967   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0313 23:32:03.351401   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:32:03.351917   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:32:03.351937   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:32:03.352288   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:32:03.352763   13619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:32:03.352787   13619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:32:03.368488   13619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I0313 23:32:03.368935   13619 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:32:03.369437   13619 main.go:141] libmachine: Using API Version  1
	I0313 23:32:03.369456   13619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:32:03.369764   13619 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:32:03.369973   13619 main.go:141] libmachine: (addons-391283) Calling .GetState
	I0313 23:32:03.371674   13619 main.go:141] libmachine: (addons-391283) Calling .DriverName
	I0313 23:32:03.371888   13619 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0313 23:32:03.371914   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHHostname
	I0313 23:32:03.374684   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:32:03.375167   13619 main.go:141] libmachine: (addons-391283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:6b:c3", ip: ""} in network mk-addons-391283: {Iface:virbr1 ExpiryTime:2024-03-14 00:31:16 +0000 UTC Type:0 Mac:52:54:00:dc:6b:c3 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-391283 Clientid:01:52:54:00:dc:6b:c3}
	I0313 23:32:03.375193   13619 main.go:141] libmachine: (addons-391283) DBG | domain addons-391283 has defined IP address 192.168.39.216 and MAC address 52:54:00:dc:6b:c3 in network mk-addons-391283
	I0313 23:32:03.375381   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHPort
	I0313 23:32:03.375531   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHKeyPath
	I0313 23:32:03.375654   13619 main.go:141] libmachine: (addons-391283) Calling .GetSSHUsername
	I0313 23:32:03.375767   13619 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/addons-391283/id_rsa Username:docker}
	I0313 23:32:04.080558   13619 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.187199014s)
	I0313 23:32:04.080598   13619 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0313 23:32:04.080615   13619 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.187342575s)
	I0313 23:32:04.080687   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.951894893s)
	I0313 23:32:04.080567   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.206888673s)
	I0313 23:32:04.080738   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.879623337s)
	I0313 23:32:04.080733   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.080763   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.080764   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.080765   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.080773   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.080775   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.081226   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.081228   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:04.081240   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.081250   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.081255   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:04.081258   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.081270   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:04.081300   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.081307   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.081315   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.081322   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.081416   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.081433   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.081442   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.081449   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.081525   13619 node_ready.go:35] waiting up to 6m0s for node "addons-391283" to be "Ready" ...
	I0313 23:32:04.081617   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:04.081645   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.081652   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.081699   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.081709   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.082065   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:04.082105   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.082113   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.128043   13619 node_ready.go:49] node "addons-391283" has status "Ready":"True"
	I0313 23:32:04.128065   13619 node_ready.go:38] duration metric: took 46.524917ms for node "addons-391283" to be "Ready" ...
	I0313 23:32:04.128075   13619 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:32:04.211435   13619 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:04.261611   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:04.261634   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:04.261900   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:04.261916   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:04.605150   13619 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-391283" context rescaled to 1 replicas
	I0313 23:32:05.814640   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.388980098s)
	I0313 23:32:05.814654   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.340864485s)
	I0313 23:32:05.814686   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814695   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814698   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.009733664s)
	I0313 23:32:05.814712   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.814734   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814746   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.814746   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.995944492s)
	I0313 23:32:05.814765   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814783   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.814699   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.814850   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.81079513s)
	I0313 23:32:05.814873   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814882   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.814951   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.506639653s)
	I0313 23:32:05.814972   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.814980   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815052   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.214062781s)
	W0313 23:32:05.815084   13619 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0313 23:32:05.815103   13619 retry.go:31] will retry after 134.284805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0313 23:32:05.815177   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.957337841s)
	I0313 23:32:05.815196   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815207   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815485   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815511   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815517   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815524   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815536   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815545   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815554   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815562   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815568   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815623   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815653   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815662   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815671   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815679   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815808   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815816   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815838   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815845   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815854   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815860   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815867   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815868   13619 addons.go:470] Verifying addon metrics-server=true in "addons-391283"
	I0313 23:32:05.815875   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815882   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.815916   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.815942   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.815949   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.815958   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.815966   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.816056   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.816066   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.816076   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.816084   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.816901   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.816934   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.816941   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.816949   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.816956   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.817009   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.817026   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.817032   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.817040   13619 addons.go:470] Verifying addon registry=true in "addons-391283"
	I0313 23:32:05.819054   13619 out.go:177] * Verifying registry addon...
	I0313 23:32:05.818097   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.818110   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.818125   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.818136   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.818150   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.818163   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.818173   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.818428   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:05.818452   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.820945   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.820959   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.820969   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.822586   13619 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-391283 service yakd-dashboard -n yakd-dashboard
	
	I0313 23:32:05.821024   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.821037   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.821632   13619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0313 23:32:05.822680   13619 addons.go:470] Verifying addon ingress=true in "addons-391283"
	I0313 23:32:05.825572   13619 out.go:177] * Verifying ingress addon...
	I0313 23:32:05.827302   13619 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0313 23:32:05.873202   13619 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0313 23:32:05.873222   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:05.873296   13619 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0313 23:32:05.873320   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:05.918886   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:05.918907   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:05.919377   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:05.919400   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:05.950295   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0313 23:32:06.224975   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:06.391578   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:06.403780   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:06.864801   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:06.865280   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:06.929838   13619 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.557921501s)
	I0313 23:32:06.931234   13619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0313 23:32:06.930054   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.916423159s)
	I0313 23:32:06.932615   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:06.933925   13619 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0313 23:32:06.932629   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:06.935142   13619 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0313 23:32:06.935160   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0313 23:32:06.935330   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:06.935377   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:06.935389   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:06.935404   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:06.935415   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:06.935658   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:06.935696   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:06.935703   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:06.935713   13619 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-391283"
	I0313 23:32:06.937121   13619 out.go:177] * Verifying csi-hostpath-driver addon...
	I0313 23:32:06.939260   13619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0313 23:32:06.970532   13619 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0313 23:32:06.970556   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:06.985735   13619 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0313 23:32:06.985753   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0313 23:32:07.028456   13619 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0313 23:32:07.028471   13619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0313 23:32:07.072302   13619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0313 23:32:07.351197   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:07.402297   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:07.588377   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:07.830262   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:07.836263   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:07.944944   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:08.333986   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:08.337232   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:08.445011   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:08.730513   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:08.801114   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.850783245s)
	I0313 23:32:08.801167   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:08.801181   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:08.801438   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:08.801473   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:08.801480   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:08.801489   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:08.801497   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:08.801737   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:08.801775   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:08.801793   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:08.847936   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:08.848121   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:08.949674   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:09.236408   13619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.164066364s)
	I0313 23:32:09.236460   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:09.236474   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:09.236798   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:09.236839   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:09.236848   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:09.236869   13619 main.go:141] libmachine: Making call to close driver server
	I0313 23:32:09.236893   13619 main.go:141] libmachine: (addons-391283) Calling .Close
	I0313 23:32:09.237165   13619 main.go:141] libmachine: (addons-391283) DBG | Closing plugin on server side
	I0313 23:32:09.237210   13619 main.go:141] libmachine: Successfully made call to close driver server
	I0313 23:32:09.237222   13619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0313 23:32:09.238655   13619 addons.go:470] Verifying addon gcp-auth=true in "addons-391283"
	I0313 23:32:09.240192   13619 out.go:177] * Verifying gcp-auth addon...
	I0313 23:32:09.242285   13619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0313 23:32:09.252232   13619 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0313 23:32:09.252254   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:09.334378   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:09.336487   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:09.447697   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:09.746095   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:09.828221   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:09.831322   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:09.949861   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:10.246193   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:10.330506   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:10.333889   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:10.444495   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:10.745775   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:10.828019   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:10.830441   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:10.949091   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:11.222069   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:11.245763   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:11.327268   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:11.331046   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:11.444481   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:11.745638   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:11.834451   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:11.834517   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:11.945655   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:12.245375   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:12.328256   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:12.331681   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:12.446687   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:12.746379   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:12.831306   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:12.836702   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:12.945462   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:13.246220   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:13.328100   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:13.330841   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:13.444865   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:13.720794   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:13.746019   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:13.827309   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:13.831152   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:13.945106   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:14.248065   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:14.328525   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:14.332155   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:14.445067   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:14.746204   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:14.827260   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:14.831024   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:14.944313   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:15.245652   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:15.328358   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:15.332006   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:15.445193   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:15.746239   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:15.828554   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:15.831834   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:15.944653   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:16.220537   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:16.245656   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:16.328363   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:16.331162   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:16.451813   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:16.745523   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:16.832272   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:16.832682   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:16.946313   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:17.247737   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:17.329260   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:17.331522   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:17.445249   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:17.746318   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:17.828819   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:17.831524   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:17.946086   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:18.221119   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:18.247738   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:18.328442   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:18.331738   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:18.446232   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:18.746298   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:18.827833   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:18.831239   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:18.945579   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:19.247011   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:19.329675   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:19.332479   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:19.447193   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:19.747358   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:19.829639   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:19.831580   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:19.947060   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:20.246425   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:20.329127   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:20.332257   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:20.446621   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:20.730550   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:20.747052   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:20.828150   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:20.831174   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:20.945983   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:21.250755   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:21.341543   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:21.347246   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:21.447321   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:21.746462   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:21.828922   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:21.832161   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:21.945332   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:22.249917   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:22.340127   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:22.342670   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:22.445706   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:22.746334   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:22.841580   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:22.847728   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:22.946301   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:23.218907   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:23.247958   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:23.331133   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:23.333873   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:23.446641   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:23.746501   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:23.827980   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:23.831093   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:23.945316   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:24.248973   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:24.327306   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:24.331324   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:24.446191   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:24.745773   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:24.828152   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:24.831156   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:24.948563   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:25.234258   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:25.254109   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:25.334531   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:25.336199   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:25.445741   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:25.745636   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:25.828370   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:25.831604   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:25.944860   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:26.246898   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:26.328195   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:26.335071   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:26.444494   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:26.754724   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:26.828014   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:26.831335   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:26.945940   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:27.246316   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:27.327936   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:27.331598   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:27.447662   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:27.717651   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:27.745430   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:27.828140   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:27.831793   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:27.945918   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:28.321870   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:28.328965   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:28.333600   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:28.448823   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:28.754034   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:28.828566   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:28.833132   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:28.945046   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:29.246563   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:29.333431   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:29.338099   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:29.458479   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:29.719676   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:29.760288   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:29.842750   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:29.842910   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:29.944618   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:30.245774   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:30.327464   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:30.331923   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:30.445285   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:30.745497   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:30.829027   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:30.831976   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:30.944755   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:31.246194   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:31.328473   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:31.331266   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:31.446526   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:31.746527   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:31.828342   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:31.832321   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:31.945575   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:32.218796   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:32.245614   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:32.332095   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:32.335999   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:32.446565   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:32.746066   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:32.832683   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:32.843272   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:32.945556   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:33.247526   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:33.331315   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:33.333774   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:33.446258   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:33.746709   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:33.829792   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:33.832544   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:33.946681   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:34.220643   13619 pod_ready.go:102] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"False"
	I0313 23:32:34.246540   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:34.329419   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:34.333743   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:34.448535   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:34.747421   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:34.827926   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:34.830730   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:34.947370   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:35.251381   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:35.328242   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:35.332329   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:35.445958   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:35.746662   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:35.828804   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:35.831086   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:35.945358   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:36.246478   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:36.330353   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:36.331980   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:36.468854   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:36.720743   13619 pod_ready.go:92] pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:36.720768   13619 pod_ready.go:81] duration metric: took 32.509306169s for pod "coredns-5dd5756b68-5kr62" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.720780   13619 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lvw9b" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.723415   13619 pod_ready.go:97] error getting pod "coredns-5dd5756b68-lvw9b" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-lvw9b" not found
	I0313 23:32:36.723433   13619 pod_ready.go:81] duration metric: took 2.645928ms for pod "coredns-5dd5756b68-lvw9b" in "kube-system" namespace to be "Ready" ...
	E0313 23:32:36.723442   13619 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-lvw9b" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-lvw9b" not found
	I0313 23:32:36.723448   13619 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.729228   13619 pod_ready.go:92] pod "etcd-addons-391283" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:36.729247   13619 pod_ready.go:81] duration metric: took 5.792532ms for pod "etcd-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.729258   13619 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.735578   13619 pod_ready.go:92] pod "kube-apiserver-addons-391283" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:36.735592   13619 pod_ready.go:81] duration metric: took 6.327694ms for pod "kube-apiserver-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.735598   13619 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.746420   13619 pod_ready.go:92] pod "kube-controller-manager-addons-391283" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:36.746435   13619 pod_ready.go:81] duration metric: took 10.830425ms for pod "kube-controller-manager-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.746443   13619 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thbzv" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.748345   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:36.828852   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:36.832536   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:36.917781   13619 pod_ready.go:92] pod "kube-proxy-thbzv" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:36.917802   13619 pod_ready.go:81] duration metric: took 171.354165ms for pod "kube-proxy-thbzv" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.917811   13619 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:36.945403   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:37.246772   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:37.315600   13619 pod_ready.go:92] pod "kube-scheduler-addons-391283" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:37.315620   13619 pod_ready.go:81] duration metric: took 397.802617ms for pod "kube-scheduler-addons-391283" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:37.315629   13619 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-qg6fd" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:37.327960   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:37.331240   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:37.445535   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:37.716448   13619 pod_ready.go:92] pod "metrics-server-69cf46c98-qg6fd" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:37.716469   13619 pod_ready.go:81] duration metric: took 400.834701ms for pod "metrics-server-69cf46c98-qg6fd" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:37.716478   13619 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-svvmq" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:37.746464   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:37.827633   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:37.830836   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:37.944660   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:38.115865   13619 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-svvmq" in "kube-system" namespace has status "Ready":"True"
	I0313 23:32:38.115886   13619 pod_ready.go:81] duration metric: took 399.401787ms for pod "nvidia-device-plugin-daemonset-svvmq" in "kube-system" namespace to be "Ready" ...
	I0313 23:32:38.115905   13619 pod_ready.go:38] duration metric: took 33.987820101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0313 23:32:38.115925   13619 api_server.go:52] waiting for apiserver process to appear ...
	I0313 23:32:38.115983   13619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:32:38.141507   13619 api_server.go:72] duration metric: took 42.138844414s to wait for apiserver process to appear ...
	I0313 23:32:38.141530   13619 api_server.go:88] waiting for apiserver healthz status ...
	I0313 23:32:38.141548   13619 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I0313 23:32:38.146912   13619 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I0313 23:32:38.148210   13619 api_server.go:141] control plane version: v1.28.4
	I0313 23:32:38.148232   13619 api_server.go:131] duration metric: took 6.693997ms to wait for apiserver health ...
	I0313 23:32:38.148240   13619 system_pods.go:43] waiting for kube-system pods to appear ...
	I0313 23:32:38.246676   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:38.325460   13619 system_pods.go:59] 18 kube-system pods found
	I0313 23:32:38.325490   13619 system_pods.go:61] "coredns-5dd5756b68-5kr62" [a2a022a3-8333-4ec5-98fe-311d11c89ca8] Running
	I0313 23:32:38.325497   13619 system_pods.go:61] "csi-hostpath-attacher-0" [acf54164-eb26-4bdf-8c8a-7d0390109a5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0313 23:32:38.325503   13619 system_pods.go:61] "csi-hostpath-resizer-0" [d5f6949d-a8ce-44ad-a171-b29154a78927] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0313 23:32:38.325510   13619 system_pods.go:61] "csi-hostpathplugin-f9xth" [81a4207b-7dd6-4c77-ad57-8a23abd7485c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0313 23:32:38.325514   13619 system_pods.go:61] "etcd-addons-391283" [5078f3d0-368c-4927-81b8-20763662b2e1] Running
	I0313 23:32:38.325519   13619 system_pods.go:61] "kube-apiserver-addons-391283" [416525b3-46e0-47b5-8f11-274a968793b0] Running
	I0313 23:32:38.325522   13619 system_pods.go:61] "kube-controller-manager-addons-391283" [90087b1e-1e3c-4b4b-978d-27085bb14347] Running
	I0313 23:32:38.325525   13619 system_pods.go:61] "kube-ingress-dns-minikube" [02bcbd7d-ccac-41b1-820b-8b63c18786df] Running
	I0313 23:32:38.325528   13619 system_pods.go:61] "kube-proxy-thbzv" [35ea23ae-0f8c-4919-ac90-415578b3c21d] Running
	I0313 23:32:38.325531   13619 system_pods.go:61] "kube-scheduler-addons-391283" [f9682206-8bed-4921-93e1-f4ca8c0bb444] Running
	I0313 23:32:38.325534   13619 system_pods.go:61] "metrics-server-69cf46c98-qg6fd" [0147464d-fb7a-4451-9f18-57a19ddb6e48] Running
	I0313 23:32:38.325537   13619 system_pods.go:61] "nvidia-device-plugin-daemonset-svvmq" [3e7ad75d-2aa7-4406-8854-6445d33ba8b0] Running
	I0313 23:32:38.325541   13619 system_pods.go:61] "registry-proxy-b4frv" [83d3e173-f718-464d-849f-67f34cc21b80] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0313 23:32:38.325549   13619 system_pods.go:61] "registry-v9w67" [687297a7-67fd-473c-b3ec-9cbbd110301d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0313 23:32:38.325552   13619 system_pods.go:61] "snapshot-controller-58dbcc7b99-766x6" [21837f14-f5f1-4bce-8230-28a19085f6dc] Running
	I0313 23:32:38.325562   13619 system_pods.go:61] "snapshot-controller-58dbcc7b99-7qzsp" [45ea2cee-0c5c-46b1-a234-1b3b418ba874] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:32:38.325565   13619 system_pods.go:61] "storage-provisioner" [e61a848a-dad9-43a2-a0ef-196ad5871e98] Running
	I0313 23:32:38.325568   13619 system_pods.go:61] "tiller-deploy-7b677967b9-llgf5" [5dcdb5f0-3426-4b9c-a87e-5fd13bffbc04] Running
	I0313 23:32:38.325574   13619 system_pods.go:74] duration metric: took 177.329056ms to wait for pod list to return data ...
	I0313 23:32:38.325590   13619 default_sa.go:34] waiting for default service account to be created ...
	I0313 23:32:38.327696   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:38.331203   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:38.445433   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:38.520921   13619 default_sa.go:45] found service account: "default"
	I0313 23:32:38.520945   13619 default_sa.go:55] duration metric: took 195.348754ms for default service account to be created ...
	I0313 23:32:38.520953   13619 system_pods.go:116] waiting for k8s-apps to be running ...
	I0313 23:32:38.724283   13619 system_pods.go:86] 18 kube-system pods found
	I0313 23:32:38.724312   13619 system_pods.go:89] "coredns-5dd5756b68-5kr62" [a2a022a3-8333-4ec5-98fe-311d11c89ca8] Running
	I0313 23:32:38.724320   13619 system_pods.go:89] "csi-hostpath-attacher-0" [acf54164-eb26-4bdf-8c8a-7d0390109a5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0313 23:32:38.724327   13619 system_pods.go:89] "csi-hostpath-resizer-0" [d5f6949d-a8ce-44ad-a171-b29154a78927] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0313 23:32:38.724335   13619 system_pods.go:89] "csi-hostpathplugin-f9xth" [81a4207b-7dd6-4c77-ad57-8a23abd7485c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0313 23:32:38.724340   13619 system_pods.go:89] "etcd-addons-391283" [5078f3d0-368c-4927-81b8-20763662b2e1] Running
	I0313 23:32:38.724345   13619 system_pods.go:89] "kube-apiserver-addons-391283" [416525b3-46e0-47b5-8f11-274a968793b0] Running
	I0313 23:32:38.724349   13619 system_pods.go:89] "kube-controller-manager-addons-391283" [90087b1e-1e3c-4b4b-978d-27085bb14347] Running
	I0313 23:32:38.724354   13619 system_pods.go:89] "kube-ingress-dns-minikube" [02bcbd7d-ccac-41b1-820b-8b63c18786df] Running
	I0313 23:32:38.724358   13619 system_pods.go:89] "kube-proxy-thbzv" [35ea23ae-0f8c-4919-ac90-415578b3c21d] Running
	I0313 23:32:38.724361   13619 system_pods.go:89] "kube-scheduler-addons-391283" [f9682206-8bed-4921-93e1-f4ca8c0bb444] Running
	I0313 23:32:38.724365   13619 system_pods.go:89] "metrics-server-69cf46c98-qg6fd" [0147464d-fb7a-4451-9f18-57a19ddb6e48] Running
	I0313 23:32:38.724369   13619 system_pods.go:89] "nvidia-device-plugin-daemonset-svvmq" [3e7ad75d-2aa7-4406-8854-6445d33ba8b0] Running
	I0313 23:32:38.724375   13619 system_pods.go:89] "registry-proxy-b4frv" [83d3e173-f718-464d-849f-67f34cc21b80] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0313 23:32:38.724382   13619 system_pods.go:89] "registry-v9w67" [687297a7-67fd-473c-b3ec-9cbbd110301d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0313 23:32:38.724386   13619 system_pods.go:89] "snapshot-controller-58dbcc7b99-766x6" [21837f14-f5f1-4bce-8230-28a19085f6dc] Running
	I0313 23:32:38.724398   13619 system_pods.go:89] "snapshot-controller-58dbcc7b99-7qzsp" [45ea2cee-0c5c-46b1-a234-1b3b418ba874] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0313 23:32:38.724402   13619 system_pods.go:89] "storage-provisioner" [e61a848a-dad9-43a2-a0ef-196ad5871e98] Running
	I0313 23:32:38.724408   13619 system_pods.go:89] "tiller-deploy-7b677967b9-llgf5" [5dcdb5f0-3426-4b9c-a87e-5fd13bffbc04] Running
	I0313 23:32:38.724415   13619 system_pods.go:126] duration metric: took 203.457521ms to wait for k8s-apps to be running ...
	I0313 23:32:38.724430   13619 system_svc.go:44] waiting for kubelet service to be running ....
	I0313 23:32:38.724474   13619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:32:38.746515   13619 system_svc.go:56] duration metric: took 22.078171ms WaitForService to wait for kubelet
	I0313 23:32:38.746542   13619 kubeadm.go:576] duration metric: took 42.743886988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0313 23:32:38.746568   13619 node_conditions.go:102] verifying NodePressure condition ...
	I0313 23:32:38.747790   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:38.838207   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:38.838838   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:38.918014   13619 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0313 23:32:38.918038   13619 node_conditions.go:123] node cpu capacity is 2
	I0313 23:32:38.918050   13619 node_conditions.go:105] duration metric: took 171.47663ms to run NodePressure ...
	I0313 23:32:38.918062   13619 start.go:240] waiting for startup goroutines ...
	I0313 23:32:38.945296   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:39.247752   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:39.328586   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:39.331638   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:39.445623   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:39.746450   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:39.828399   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:39.831971   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:39.945467   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:40.249965   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:40.327316   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:40.331843   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:40.452782   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:40.746279   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:40.835233   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:40.835478   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:40.945871   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:41.245909   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:41.331572   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:41.334239   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:41.444803   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:41.746223   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:41.829352   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:41.834236   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:41.946177   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:42.246971   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:42.329232   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:42.331811   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:42.452994   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:42.746716   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:42.838994   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:42.840497   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:42.951978   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:43.247629   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:43.330545   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:43.335472   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:43.445488   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:43.746576   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:43.830035   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:43.835624   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:43.944997   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:44.247048   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:44.328203   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:44.331377   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:44.747107   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:44.748625   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:44.828303   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:44.831404   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:44.946263   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:45.246619   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:45.328490   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:45.332443   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:45.444995   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:45.747137   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:45.827477   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:45.831179   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:45.944697   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:46.246667   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:46.327853   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:46.331066   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:46.444916   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:46.745140   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:46.828142   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:46.831539   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:46.945559   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:47.247279   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:47.327651   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:47.331145   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:47.444649   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:47.748641   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:47.828355   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:47.832256   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:47.946491   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:48.247420   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:48.330095   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:48.331807   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:48.449681   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:48.746947   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:48.828801   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:48.832107   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:48.945631   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:49.247277   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:49.328709   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:49.332242   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:49.447488   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:49.747718   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:50.114660   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:50.114986   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:50.118571   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:50.246496   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:50.327708   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:50.333710   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:50.446490   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:50.746618   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:50.830333   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:50.832079   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:50.950407   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:51.246368   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:51.328897   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:51.337083   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:51.445762   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:51.747055   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:51.828286   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:51.832081   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:51.945631   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:52.248326   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:52.328847   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:52.332336   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:52.446564   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:52.746132   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:52.829769   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:52.832546   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:52.946608   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:53.246730   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:53.328728   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:53.331745   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:53.447349   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:53.746796   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:53.826752   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:53.831302   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:53.948431   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:54.246065   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:54.328373   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:54.331987   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:54.445461   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:54.746448   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:54.829737   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:54.832320   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:54.945013   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:55.247091   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:55.328099   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:55.331656   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:55.445800   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:55.745989   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:55.827952   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:55.832503   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:55.948248   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:56.246716   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:56.328202   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:56.330973   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:56.445379   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:56.746441   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:56.830355   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:56.842273   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:56.945657   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:57.246610   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:57.331282   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:57.334664   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:57.446031   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:57.746295   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:57.827544   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:57.830995   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:57.945406   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:58.247927   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:58.327706   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:58.332105   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:58.446238   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:58.746088   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:58.828926   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:58.835991   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:58.950261   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:59.248585   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:59.328918   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:59.331866   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:59.447121   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:32:59.745906   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:32:59.828487   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:32:59.831776   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:32:59.948267   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:00.247151   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:00.328521   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:33:00.331313   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:00.447599   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:00.747331   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:00.828246   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0313 23:33:00.831429   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:00.954743   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:01.247745   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:01.328850   13619 kapi.go:107] duration metric: took 55.507216417s to wait for kubernetes.io/minikube-addons=registry ...
	I0313 23:33:01.332383   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:01.446863   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:01.746231   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:01.836672   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:02.179909   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:02.247683   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:02.332910   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:02.448152   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:02.746805   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:02.832862   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:02.946041   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:03.246622   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:03.332410   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:03.446066   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:03.746032   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:03.835122   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:03.945106   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:04.246578   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:04.332236   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:04.499329   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:04.747282   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:04.834952   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:04.947622   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:05.247916   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:05.332670   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:05.445317   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:05.746905   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:05.832807   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:05.950126   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:06.252227   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:06.332299   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:06.462527   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:06.747016   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:06.832982   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:06.945460   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:07.246711   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:07.332136   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:07.446249   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:07.747155   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:07.832903   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:07.947772   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:08.246773   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:08.332572   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:08.446698   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:08.746735   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:08.832049   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:09.236606   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:09.246612   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:09.331978   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:09.448870   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:09.928685   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:09.929174   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:09.948491   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:10.248530   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:10.333767   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:10.449830   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:10.747005   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:10.833353   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:10.955098   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:11.248705   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:11.332521   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:11.449640   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:11.746928   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:11.832184   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:11.947702   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:12.247137   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:12.332964   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:12.445019   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:12.746945   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:12.835733   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:12.945391   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:13.246506   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:13.331701   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:13.451160   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:13.748276   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:13.832185   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:13.949461   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:14.246446   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:14.333203   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:14.446001   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:14.747757   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:15.079984   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:15.080453   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:15.246320   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:15.332655   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:15.445054   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:15.746127   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:15.832410   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:15.945707   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:16.246636   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:16.331836   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:16.447487   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:16.745832   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:16.832312   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:16.945177   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:17.246332   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:17.364675   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:17.445294   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:17.746984   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:17.831902   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:17.945244   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:18.246493   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:18.333772   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:18.445687   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:18.746205   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:18.832183   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:18.944729   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:19.246821   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:19.333287   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:19.697208   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:19.746567   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:19.833138   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:19.946464   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:20.247573   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:20.332661   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:20.445810   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:20.745784   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:20.832310   13619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0313 23:33:20.953192   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:21.364807   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:21.370560   13619 kapi.go:107] duration metric: took 1m15.543250047s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0313 23:33:21.447350   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:21.750060   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:21.947208   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:22.246822   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:22.449048   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:22.745988   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:23.024267   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:23.246642   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:23.445405   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:23.745544   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:23.944885   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:24.246123   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:24.448549   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:24.745921   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:24.944831   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:25.245840   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:25.445887   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:25.746472   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:25.946247   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:26.247660   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:26.446058   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:26.746006   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:26.956932   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:27.245890   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:27.446192   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:27.745790   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:27.944836   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0313 23:33:28.245880   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:28.445566   13619 kapi.go:107] duration metric: took 1m21.506305821s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0313 23:33:28.746586   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:29.246517   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:29.745909   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:30.246861   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:30.746457   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:31.247151   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:31.745900   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:32.247017   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:32.746969   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:33.247074   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:33.747333   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:34.246393   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:34.747373   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:35.247385   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:35.747355   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:36.246890   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:36.747853   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:37.251932   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:38.101769   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:38.246101   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:38.748647   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:39.246759   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:39.746655   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:40.246801   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:40.746405   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:41.246604   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:41.747050   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:42.246348   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:42.746499   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:43.246386   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:43.746206   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:44.245908   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:44.747007   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:45.247170   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:45.748101   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:46.247811   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:46.746619   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:47.246336   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:47.746805   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:48.247688   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:48.746259   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:49.249300   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:49.746845   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:50.247208   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:50.747509   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:51.246289   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:51.745916   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:52.246791   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:52.747162   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:53.248519   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:53.746486   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:54.246324   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:54.746261   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:55.246332   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:55.746463   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:56.247426   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:56.746586   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:57.246850   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:57.746853   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:58.251167   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:58.746509   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:59.246353   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:33:59.747749   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:00.246344   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:00.747266   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:01.246268   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:01.747104   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:02.246336   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:02.746814   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:03.246634   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:03.746896   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:04.247129   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:04.746037   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:05.247453   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:05.746235   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:06.246321   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:06.746548   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:07.247642   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:07.746552   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:08.246736   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:08.746464   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:09.246442   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:09.746147   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:10.245797   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:10.746353   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:11.246715   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:11.747168   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:12.246796   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:12.746988   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:13.248873   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:13.746872   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:14.247250   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:14.745989   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:15.247166   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:15.748648   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:16.246906   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:16.747091   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:17.247205   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:17.747048   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:18.247405   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:18.747069   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:19.247025   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:19.746940   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:20.246760   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:20.746778   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:21.249722   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:21.746829   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:22.246879   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:22.747373   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:23.246235   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:23.745792   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:24.246529   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:24.747278   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:25.246752   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:25.746636   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:26.247736   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:26.747913   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:27.246871   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:27.747985   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:28.247159   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:28.747927   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:29.247236   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:29.746157   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:30.246541   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:30.745743   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:31.247451   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:31.747198   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:32.245642   13619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0313 23:34:32.748201   13619 kapi.go:107] duration metric: took 2m23.505912203s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0313 23:34:32.749839   13619 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-391283 cluster.
	I0313 23:34:32.751156   13619 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0313 23:34:32.752362   13619 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0313 23:34:32.753700   13619 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0313 23:34:32.754941   13619 addons.go:505] duration metric: took 2m36.752238656s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0313 23:34:32.754975   13619 start.go:245] waiting for cluster config update ...
	I0313 23:34:32.754990   13619 start.go:254] writing updated cluster config ...
	I0313 23:34:32.755266   13619 ssh_runner.go:195] Run: rm -f paused
	I0313 23:34:32.808781   13619 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0313 23:34:32.810881   13619 out.go:177] * Done! kubectl is now configured to use "addons-391283" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	26894a695ab5b       a416a98b71e22       4 seconds ago        Exited              helper-pod                               0                   f8721277e5d8a       helper-pod-delete-pvc-2211f1af-7d8e-41d4-9423-4028f6871ce2
	c6b9346e9945d       beae173ccac6a       4 seconds ago        Exited              registry-test                            0                   741b217480dbb       registry-test
	e06c76f5bd7e8       ba5dc23f65d4c       7 seconds ago        Exited              busybox                                  0                   e1a4634f2f9ed       test-local-path
	e6e41760a8958       fc1caf62c3016       20 seconds ago       Running             gcp-auth                                 0                   ca4e32664eb0a       gcp-auth-5f6b4f85fd-r8kqc
	ed36ab233b60f       81f48f8d24e42       44 seconds ago       Exited              gadget                                   3                   3144b58c0be86       gadget-s8zfb
	151eb5ce566f5       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   4e400b549fab4       csi-hostpathplugin-f9xth
	9986ca98d9bcf       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   4e400b549fab4       csi-hostpathplugin-f9xth
	72553380a8a44       e899260153aed       About a minute ago   Running             liveness-probe                           0                   4e400b549fab4       csi-hostpathplugin-f9xth
	6503e633e720f       e255e073c508c       About a minute ago   Running             hostpath                                 0                   4e400b549fab4       csi-hostpathplugin-f9xth
	a174a22756189       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   4e400b549fab4       csi-hostpathplugin-f9xth
	6ca71432c7549       ffcc66479b5ba       About a minute ago   Running             controller                               0                   7b9ad818d05c1       ingress-nginx-controller-76dc478dd8-zg82m
	b0cdb6357da68       b29d748098e32       About a minute ago   Exited              patch                                    2                   b098be95d2b8b       ingress-nginx-admission-patch-k4mkn
	bfdef75a72ca6       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   927ec8d72e8bc       csi-hostpath-resizer-0
	481db43852f83       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   02a857cbc513d       csi-hostpath-attacher-0
	671121736b954       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   4e400b549fab4       csi-hostpathplugin-f9xth
	a4534ff7408f4       b29d748098e32       About a minute ago   Exited              create                                   0                   ac1686c449d58       ingress-nginx-admission-create-4c9vx
	fbc810b458742       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   329159d9b904d       snapshot-controller-58dbcc7b99-7qzsp
	c4ae24ee0a306       e16d1e3a10667       2 minutes ago        Running             local-path-provisioner                   0                   ca167cc48e95e       local-path-provisioner-78b46b4d5c-8rvll
	fa5c935ef31d6       31de47c733c91       2 minutes ago        Running             yakd                                     0                   b2a8fc19eece6       yakd-dashboard-9947fc6bf-xqbcn
	563d900b98ff0       35eab485356b4       2 minutes ago        Running             cloud-spanner-emulator                   0                   68debdb4201bb       cloud-spanner-emulator-6548d5df46-tc5wj
	c71e29e333d69       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   8bb27943c27d8       snapshot-controller-58dbcc7b99-766x6
	bd6d91881657a       b9a5a1927366a       2 minutes ago        Exited              metrics-server                           0                   f97c6a299ff11       metrics-server-69cf46c98-qg6fd
	2481b0a1ac652       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   9fe28b7f9d4a9       kube-ingress-dns-minikube
	32dd5202f3e57       3f39089e90831       2 minutes ago        Running             tiller                                   0                   45ef6b6f85dc6       tiller-deploy-7b677967b9-llgf5
	66dc89cdc90f3       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   51779c131ff38       storage-provisioner
	eb980a4d93cbf       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   02fa5a76027f2       coredns-5dd5756b68-5kr62
	12a699a5efe78       83f6cc407eed8       2 minutes ago        Running             kube-proxy                               0                   95b7fdc627f59       kube-proxy-thbzv
	7932fe1318920       73deb9a3f7025       3 minutes ago        Running             etcd                                     0                   496eb7d662738       etcd-addons-391283
	170efaac822c5       e3db313c6dbc0       3 minutes ago        Running             kube-scheduler                           0                   7eb774f5bbaff       kube-scheduler-addons-391283
	bf0fc5d06f1a6       7fe0e6f37db33       3 minutes ago        Running             kube-apiserver                           0                   bd78cff0adf83       kube-apiserver-addons-391283
	b955980653d6d       d058aa5ab969c       3 minutes ago        Running             kube-controller-manager                  0                   1a05d581b118e       kube-controller-manager-addons-391283
	
	
	==> containerd <==
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.679352690Z" level=info msg="StopPodSandbox for \"d7aa8493a8bfc63ea9ef4d2cf8b1ba73fa0fbb4e0eb60110594b97ce90d8ff96\""
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.679437459Z" level=info msg="Container to stop \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.748412547Z" level=info msg="shim disconnected" id=f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.752178631Z" level=warning msg="cleaning up after shim disconnected" id=f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.753615627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.877978838Z" level=info msg="shim disconnected" id=d7aa8493a8bfc63ea9ef4d2cf8b1ba73fa0fbb4e0eb60110594b97ce90d8ff96 namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.878149592Z" level=warning msg="cleaning up after shim disconnected" id=d7aa8493a8bfc63ea9ef4d2cf8b1ba73fa0fbb4e0eb60110594b97ce90d8ff96 namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.878176742Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.902683140Z" level=info msg="TearDown network for sandbox \"52ebd3d58807ae299f5649bc75b992699c4a37ff505cfb1fa9edc04be3d6dcfc\" successfully"
	Mar 13 23:34:50 addons-391283 containerd[650]: time="2024-03-13T23:34:50.902744620Z" level=info msg="StopPodSandbox for \"52ebd3d58807ae299f5649bc75b992699c4a37ff505cfb1fa9edc04be3d6dcfc\" returns successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.176730087Z" level=info msg="TearDown network for sandbox \"f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef\" successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.176945595Z" level=info msg="StopPodSandbox for \"f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef\" returns successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.268844857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:bb170354-ae83-4e4e-bdb6-5eed44490606,Namespace:default,Attempt:0,}"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.310853915Z" level=info msg="TearDown network for sandbox \"d7aa8493a8bfc63ea9ef4d2cf8b1ba73fa0fbb4e0eb60110594b97ce90d8ff96\" successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.310952637Z" level=info msg="StopPodSandbox for \"d7aa8493a8bfc63ea9ef4d2cf8b1ba73fa0fbb4e0eb60110594b97ce90d8ff96\" returns successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.474899831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.475220399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.475408066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.475706423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.580866661Z" level=info msg="RemoveContainer for \"8f6bb9ae25a429f3106b63c4d8f0c87902af092872635608e83ae05f1e524e19\""
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.604274475Z" level=info msg="RemoveContainer for \"8f6bb9ae25a429f3106b63c4d8f0c87902af092872635608e83ae05f1e524e19\" returns successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.607187066Z" level=info msg="RemoveContainer for \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\""
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.633316330Z" level=info msg="RemoveContainer for \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\" returns successfully"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.633997117Z" level=error msg="ContainerStatus for \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\": not found"
	Mar 13 23:34:51 addons-391283 containerd[650]: time="2024-03-13T23:34:51.698441961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:bb170354-ae83-4e4e-bdb6-5eed44490606,Namespace:default,Attempt:0,} returns sandbox id \"5d5157f847daaa7ae2cd65cd5613332650507cc33782edef84abad802c5582c5\""
	
	
	==> coredns [eb980a4d93cbf695d47458b7599df5a01bf7a390515c5c599b29dcd59bf547a0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53876 - 8063 "HINFO IN 5758111218616873530.1637796044241374850. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00813699s
	[INFO] 10.244.0.22:56268 - 28835 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000558528s
	[INFO] 10.244.0.22:38961 - 34110 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000256963s
	[INFO] 10.244.0.22:37853 - 32329 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158127s
	[INFO] 10.244.0.22:36270 - 6446 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000265132s
	[INFO] 10.244.0.22:51069 - 18352 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000654939s
	[INFO] 10.244.0.22:47495 - 57846 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013051s
	[INFO] 10.244.0.22:52876 - 27183 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00060017s
	[INFO] 10.244.0.22:43471 - 39281 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.001309097s
	[INFO] 10.244.0.25:45375 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236005s
	[INFO] 10.244.0.25:56057 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126966s
	
	
	==> describe nodes <==
	Name:               addons-391283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-391283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=addons-391283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_13T23_31_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-391283
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-391283"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Mar 2024 23:31:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-391283
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Mar 2024 23:34:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Mar 2024 23:34:47 +0000   Wed, 13 Mar 2024 23:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Mar 2024 23:34:47 +0000   Wed, 13 Mar 2024 23:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Mar 2024 23:34:47 +0000   Wed, 13 Mar 2024 23:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Mar 2024 23:34:47 +0000   Wed, 13 Mar 2024 23:31:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    addons-391283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 924e5da67d1c4ba39e62892d3b0798e3
	  System UUID:                924e5da6-7d1c-4ba3-9e62-892d3b0798e3
	  Boot ID:                    3490a337-32dc-464b-bc2e-8da8cbc095bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-tc5wj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-s8zfb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-5f6b4f85fd-r8kqc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-zg82m    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m47s
	  kube-system                 coredns-5dd5756b68-5kr62                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 csi-hostpathplugin-f9xth                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 etcd-addons-391283                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m9s
	  kube-system                 helm-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-apiserver-addons-391283                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-controller-manager-addons-391283        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-thbzv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-addons-391283                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 snapshot-controller-58dbcc7b99-766x6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 snapshot-controller-58dbcc7b99-7qzsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 tiller-deploy-7b677967b9-llgf5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  local-path-storage          local-path-provisioner-78b46b4d5c-8rvll      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-xqbcn               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m16s (x8 over 3m17s)  kubelet          Node addons-391283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m16s (x8 over 3m17s)  kubelet          Node addons-391283 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m16s (x7 over 3m17s)  kubelet          Node addons-391283 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m9s                   kubelet          Node addons-391283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s                   kubelet          Node addons-391283 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s                   kubelet          Node addons-391283 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m9s                   kubelet          Node addons-391283 status is now: NodeReady
	  Normal  RegisteredNode           2m57s                  node-controller  Node addons-391283 event: Registered Node addons-391283 in Controller
	
	
	==> dmesg <==
	[  +0.261865] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +5.454823] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.063687] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.473834] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +4.271930] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.356837] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.909549] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.069789] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.307653] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.126482] kauditd_printk_skb: 21 callbacks suppressed
	[Mar13 23:32] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.056482] kauditd_printk_skb: 112 callbacks suppressed
	[  +8.120490] kauditd_printk_skb: 96 callbacks suppressed
	[ +22.008764] kauditd_printk_skb: 9 callbacks suppressed
	[ +19.466546] kauditd_printk_skb: 2 callbacks suppressed
	[Mar13 23:33] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.734046] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.103111] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.635213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.037174] kauditd_printk_skb: 55 callbacks suppressed
	[Mar13 23:34] kauditd_printk_skb: 28 callbacks suppressed
	[ +18.202307] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.657740] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.559872] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.221085] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [7932fe13189202ad8f3fb3f66833aa4de222db50d01a51ac8f09517e98847f33] <==
	{"level":"warn","ts":"2024-03-13T23:33:09.218757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.762509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81840"}
	{"level":"info","ts":"2024-03-13T23:33:09.219072Z","caller":"traceutil/trace.go:171","msg":"trace[876291720] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1062; }","duration":"290.088588ms","start":"2024-03-13T23:33:08.928974Z","end":"2024-03-13T23:33:09.219063Z","steps":["trace[876291720] 'agreement among raft nodes before linearized reading'  (duration: 289.663699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:09.910403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.92045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10528"}
	{"level":"info","ts":"2024-03-13T23:33:09.910572Z","caller":"traceutil/trace.go:171","msg":"trace[1883033703] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1069; }","duration":"179.098797ms","start":"2024-03-13T23:33:09.731463Z","end":"2024-03-13T23:33:09.910561Z","steps":["trace[1883033703] 'range keys from in-memory index tree'  (duration: 178.824305ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:33:15.056219Z","caller":"traceutil/trace.go:171","msg":"trace[1631233419] linearizableReadLoop","detail":"{readStateIndex:1145; appliedIndex:1144; }","duration":"234.997048ms","start":"2024-03-13T23:33:14.821196Z","end":"2024-03-13T23:33:15.056193Z","steps":["trace[1631233419] 'read index received'  (duration: 234.743934ms)","trace[1631233419] 'applied index is now lower than readState.Index'  (duration: 252.302µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-13T23:33:15.056454Z","caller":"traceutil/trace.go:171","msg":"trace[828308891] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"236.895892ms","start":"2024-03-13T23:33:14.819549Z","end":"2024-03-13T23:33:15.056445Z","steps":["trace[828308891] 'process raft request'  (duration: 236.433247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:15.056749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.176299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81902"}
	{"level":"info","ts":"2024-03-13T23:33:15.059437Z","caller":"traceutil/trace.go:171","msg":"trace[2097847989] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1112; }","duration":"130.950064ms","start":"2024-03-13T23:33:14.928476Z","end":"2024-03-13T23:33:15.059427Z","steps":["trace[2097847989] 'agreement among raft nodes before linearized reading'  (duration: 128.054016ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:15.056817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.739125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14111"}
	{"level":"info","ts":"2024-03-13T23:33:15.059667Z","caller":"traceutil/trace.go:171","msg":"trace[124983244] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1112; }","duration":"238.590202ms","start":"2024-03-13T23:33:14.82107Z","end":"2024-03-13T23:33:15.05966Z","steps":["trace[124983244] 'agreement among raft nodes before linearized reading'  (duration: 235.705057ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-13T23:33:19.674208Z","caller":"traceutil/trace.go:171","msg":"trace[442168486] linearizableReadLoop","detail":"{readStateIndex:1174; appliedIndex:1173; }","duration":"245.554486ms","start":"2024-03-13T23:33:19.42864Z","end":"2024-03-13T23:33:19.674194Z","steps":["trace[442168486] 'read index received'  (duration: 245.180202ms)","trace[442168486] 'applied index is now lower than readState.Index'  (duration: 373.872µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-13T23:33:19.674495Z","caller":"traceutil/trace.go:171","msg":"trace[1873377273] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"316.270146ms","start":"2024-03-13T23:33:19.358217Z","end":"2024-03-13T23:33:19.674487Z","steps":["trace[1873377273] 'process raft request'  (duration: 315.650262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:19.674645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:33:19.358193Z","time spent":"316.351684ms","remote":"127.0.0.1:59920","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1123 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-13T23:33:19.674942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.338494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81966"}
	{"level":"info","ts":"2024-03-13T23:33:19.674997Z","caller":"traceutil/trace.go:171","msg":"trace[194406873] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1140; }","duration":"246.398768ms","start":"2024-03-13T23:33:19.428591Z","end":"2024-03-13T23:33:19.67499Z","steps":["trace[194406873] 'agreement among raft nodes before linearized reading'  (duration: 246.241143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:19.675269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.727823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-13T23:33:19.675319Z","caller":"traceutil/trace.go:171","msg":"trace[2061261244] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1140; }","duration":"140.779838ms","start":"2024-03-13T23:33:19.534533Z","end":"2024-03-13T23:33:19.675312Z","steps":["trace[2061261244] 'agreement among raft nodes before linearized reading'  (duration: 140.713414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:21.347905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.058105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10848"}
	{"level":"info","ts":"2024-03-13T23:33:21.347974Z","caller":"traceutil/trace.go:171","msg":"trace[1384645928] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1158; }","duration":"117.138115ms","start":"2024-03-13T23:33:21.230826Z","end":"2024-03-13T23:33:21.347964Z","steps":["trace[1384645928] 'range keys from in-memory index tree'  (duration: 116.975417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:38.083364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.570159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-13T23:33:38.084171Z","caller":"traceutil/trace.go:171","msg":"trace[93850931] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1227; }","duration":"316.462658ms","start":"2024-03-13T23:33:37.767697Z","end":"2024-03-13T23:33:38.084159Z","steps":["trace[93850931] 'range keys from in-memory index tree'  (duration: 315.48451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:38.084389Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:33:37.767684Z","time spent":"316.693766ms","remote":"127.0.0.1:59920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-03-13T23:33:38.083503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.499319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10848"}
	{"level":"info","ts":"2024-03-13T23:33:38.084757Z","caller":"traceutil/trace.go:171","msg":"trace[1100434141] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1227; }","duration":"353.758887ms","start":"2024-03-13T23:33:37.730989Z","end":"2024-03-13T23:33:38.084748Z","steps":["trace[1100434141] 'range keys from in-memory index tree'  (duration: 352.419342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-13T23:33:38.0849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-13T23:33:37.730977Z","time spent":"353.91069ms","remote":"127.0.0.1:59936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10872,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	
	
	==> gcp-auth [e6e41760a89583082e29ba75b4556e53e15d52baad13802f4d0442609f71311d] <==
	2024/03/13 23:34:31 GCP Auth Webhook started!
	2024/03/13 23:34:33 Ready to marshal response ...
	2024/03/13 23:34:33 Ready to write response ...
	2024/03/13 23:34:33 Ready to marshal response ...
	2024/03/13 23:34:33 Ready to write response ...
	2024/03/13 23:34:43 Ready to marshal response ...
	2024/03/13 23:34:43 Ready to write response ...
	2024/03/13 23:34:44 Ready to marshal response ...
	2024/03/13 23:34:44 Ready to write response ...
	2024/03/13 23:34:47 Ready to marshal response ...
	2024/03/13 23:34:47 Ready to write response ...
	2024/03/13 23:34:50 Ready to marshal response ...
	2024/03/13 23:34:50 Ready to write response ...
	
	
	==> kernel <==
	 23:34:53 up 3 min,  0 users,  load average: 2.65, 1.36, 0.57
	Linux addons-391283 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bf0fc5d06f1a63bce2b5722d1725049802f5a7cbf524956502575b126a63f3b3] <==
	I0313 23:32:05.088016       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0313 23:32:05.536950       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.228.244"}
	I0313 23:32:05.587203       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.250.246"}
	I0313 23:32:05.641033       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0313 23:32:06.596755       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.98.32.191"}
	W0313 23:32:06.632883       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0313 23:32:06.658731       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0313 23:32:06.822822       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.15.162"}
	W0313 23:32:07.578933       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0313 23:32:08.915954       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.155.152"}
	E0313 23:32:29.769623       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.90.60:443: connect: connection refused
	W0313 23:32:29.770292       1 handler_proxy.go:93] no RequestInfo found in the context
	E0313 23:32:29.770373       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0313 23:32:29.771047       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0313 23:32:29.771932       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.90.60:443: connect: connection refused
	E0313 23:32:29.776028       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.90.60:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.90.60:443: connect: connection refused
	I0313 23:32:29.880699       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0313 23:32:39.558736       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0313 23:33:39.558629       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0313 23:34:39.560051       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0313 23:34:50.781531       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0313 23:34:51.016064       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.224.229"}
	
	
	==> kube-controller-manager [b955980653d6d3ab1435d38c3c961e552f878d25a2739a665ab7586e368dc98e] <==
	I0313 23:33:18.581154       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0313 23:33:18.582524       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0313 23:33:20.086776       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:21.118050       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:21.186921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="112.592µs"
	I0313 23:33:21.473606       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:22.116263       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:22.131934       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:22.154072       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:22.163350       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0313 23:33:22.163718       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0313 23:33:39.932345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="18.307177ms"
	I0313 23:33:39.934485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="59.428µs"
	I0313 23:33:48.021994       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0313 23:33:48.024915       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0313 23:33:48.071730       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0313 23:33:48.074224       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0313 23:34:32.430480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="16.510461ms"
	I0313 23:34:32.430607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="52.73µs"
	I0313 23:34:32.954464       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0313 23:34:33.095920       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0313 23:34:39.991258       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0313 23:34:45.120673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="9.451µs"
	I0313 23:34:48.460482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.495µs"
	I0313 23:34:50.251706       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="10.248µs"
	
	
	==> kube-proxy [12a699a5efe78ba0a433fbbd7b5db8651c031c10799963f2f1e4a1e06feeb2e1] <==
	I0313 23:31:57.087143       1 server_others.go:69] "Using iptables proxy"
	I0313 23:31:57.105811       1 node.go:141] Successfully retrieved node IP: 192.168.39.216
	I0313 23:31:57.444427       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0313 23:31:57.444474       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0313 23:31:57.484914       1 server_others.go:152] "Using iptables Proxier"
	I0313 23:31:57.484980       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0313 23:31:57.486364       1 server.go:846] "Version info" version="v1.28.4"
	I0313 23:31:57.486403       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0313 23:31:57.497322       1 config.go:188] "Starting service config controller"
	I0313 23:31:57.497345       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0313 23:31:57.497371       1 config.go:97] "Starting endpoint slice config controller"
	I0313 23:31:57.497374       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0313 23:31:57.498028       1 config.go:315] "Starting node config controller"
	I0313 23:31:57.498038       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0313 23:31:57.598244       1 shared_informer.go:318] Caches are synced for node config
	I0313 23:31:57.598290       1 shared_informer.go:318] Caches are synced for service config
	I0313 23:31:57.598318       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [170efaac822c5c688235dbd2a74bf362d3828248d8285de08a5597973624f8dc] <==
	W0313 23:31:39.700975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:31:39.700985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:31:40.521778       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0313 23:31:40.521843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0313 23:31:40.528259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0313 23:31:40.528309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0313 23:31:40.579441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0313 23:31:40.579795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0313 23:31:40.582298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0313 23:31:40.582345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0313 23:31:40.694977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0313 23:31:40.695033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0313 23:31:40.791176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0313 23:31:40.791229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0313 23:31:40.917351       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0313 23:31:40.917405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0313 23:31:40.945331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0313 23:31:40.945384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0313 23:31:40.971533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0313 23:31:40.971689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0313 23:31:40.990426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0313 23:31:40.990726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0313 23:31:41.154590       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0313 23:31:41.155784       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0313 23:31:43.662815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.263384    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/df047e7c-df6b-41bf-af5d-c31b1e1d7103-script\") pod \"df047e7c-df6b-41bf-af5d-c31b1e1d7103\" (UID: \"df047e7c-df6b-41bf-af5d-c31b1e1d7103\") "
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.263416    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/df047e7c-df6b-41bf-af5d-c31b1e1d7103-data\") pod \"df047e7c-df6b-41bf-af5d-c31b1e1d7103\" (UID: \"df047e7c-df6b-41bf-af5d-c31b1e1d7103\") "
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.263677    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df047e7c-df6b-41bf-af5d-c31b1e1d7103-data" (OuterVolumeSpecName: "data") pod "df047e7c-df6b-41bf-af5d-c31b1e1d7103" (UID: "df047e7c-df6b-41bf-af5d-c31b1e1d7103"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.264266    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df047e7c-df6b-41bf-af5d-c31b1e1d7103-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "df047e7c-df6b-41bf-af5d-c31b1e1d7103" (UID: "df047e7c-df6b-41bf-af5d-c31b1e1d7103"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.265386    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df047e7c-df6b-41bf-af5d-c31b1e1d7103-script" (OuterVolumeSpecName: "script") pod "df047e7c-df6b-41bf-af5d-c31b1e1d7103" (UID: "df047e7c-df6b-41bf-af5d-c31b1e1d7103"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.268210    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df047e7c-df6b-41bf-af5d-c31b1e1d7103-kube-api-access-nwmb6" (OuterVolumeSpecName: "kube-api-access-nwmb6") pod "df047e7c-df6b-41bf-af5d-c31b1e1d7103" (UID: "df047e7c-df6b-41bf-af5d-c31b1e1d7103"). InnerVolumeSpecName "kube-api-access-nwmb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.368757    1243 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/df047e7c-df6b-41bf-af5d-c31b1e1d7103-script\") on node \"addons-391283\" DevicePath \"\""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.368869    1243 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/df047e7c-df6b-41bf-af5d-c31b1e1d7103-data\") on node \"addons-391283\" DevicePath \"\""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.368904    1243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nwmb6\" (UniqueName: \"kubernetes.io/projected/df047e7c-df6b-41bf-af5d-c31b1e1d7103-kube-api-access-nwmb6\") on node \"addons-391283\" DevicePath \"\""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.368928    1243 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/df047e7c-df6b-41bf-af5d-c31b1e1d7103-gcp-creds\") on node \"addons-391283\" DevicePath \"\""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.469971    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxjd\" (UniqueName: \"kubernetes.io/projected/83d3e173-f718-464d-849f-67f34cc21b80-kube-api-access-pcxjd\") pod \"83d3e173-f718-464d-849f-67f34cc21b80\" (UID: \"83d3e173-f718-464d-849f-67f34cc21b80\") "
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.472503    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d3e173-f718-464d-849f-67f34cc21b80-kube-api-access-pcxjd" (OuterVolumeSpecName: "kube-api-access-pcxjd") pod "83d3e173-f718-464d-849f-67f34cc21b80" (UID: "83d3e173-f718-464d-849f-67f34cc21b80"). InnerVolumeSpecName "kube-api-access-pcxjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.565171    1243 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8721277e5d8ac807cd659c09cfc9f13fb0cbddca8fa19a3176a6325245a42ef"
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.572381    1243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pcxjd\" (UniqueName: \"kubernetes.io/projected/83d3e173-f718-464d-849f-67f34cc21b80-kube-api-access-pcxjd\") on node \"addons-391283\" DevicePath \"\""
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.577580    1243 scope.go:117] "RemoveContainer" containerID="8f6bb9ae25a429f3106b63c4d8f0c87902af092872635608e83ae05f1e524e19"
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.604712    1243 scope.go:117] "RemoveContainer" containerID="7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd"
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.633653    1243 scope.go:117] "RemoveContainer" containerID="7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd"
	Mar 13 23:34:51 addons-391283 kubelet[1243]: E0313 23:34:51.634327    1243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\": not found" containerID="7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd"
	Mar 13 23:34:51 addons-391283 kubelet[1243]: I0313 23:34:51.634358    1243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd"} err="failed to get container status \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e31e44eebef9dc0bc397e2f9015e2c508e35ee5770cbb46a8e1acf910b34fdd\": not found"
	Mar 13 23:34:52 addons-391283 kubelet[1243]: I0313 23:34:52.073727    1243 scope.go:117] "RemoveContainer" containerID="ed36ab233b60f8fce1fe911d0d97c7b3b72ecbac159db4c61c5f2059b592f06d"
	Mar 13 23:34:52 addons-391283 kubelet[1243]: I0313 23:34:52.620318    1243 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Mar 13 23:34:52 addons-391283 kubelet[1243]: I0313 23:34:52.645226    1243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/helm-test" podStartSLOduration=1.7142089440000001 podCreationTimestamp="2024-03-13 23:34:44 +0000 UTC" firstStartedPulling="2024-03-13 23:34:45.510069545 +0000 UTC m=+182.589363095" lastFinishedPulling="2024-03-13 23:34:52.441054137 +0000 UTC m=+189.520347706" observedRunningTime="2024-03-13 23:34:52.641372373 +0000 UTC m=+189.720665942" watchObservedRunningTime="2024-03-13 23:34:52.645193555 +0000 UTC m=+189.724487125"
	Mar 13 23:34:52 addons-391283 kubelet[1243]: E0313 23:34:52.677958    1243 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = InvalidArgument desc = tty and stderr cannot both be true" containerID="70cf5674d00e5359ec5f7ed585272d149c5de57a9efd1883ff624cc22747e515"
	Mar 13 23:34:53 addons-391283 kubelet[1243]: I0313 23:34:53.077304    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="687297a7-67fd-473c-b3ec-9cbbd110301d" path="/var/lib/kubelet/pods/687297a7-67fd-473c-b3ec-9cbbd110301d/volumes"
	Mar 13 23:34:53 addons-391283 kubelet[1243]: I0313 23:34:53.077916    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="83d3e173-f718-464d-849f-67f34cc21b80" path="/var/lib/kubelet/pods/83d3e173-f718-464d-849f-67f34cc21b80/volumes"
	
	
	==> storage-provisioner [66dc89cdc90f311fc872123bb896e33982bb32a1a6296189c94584f0db335037] <==
	I0313 23:32:07.840489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0313 23:32:07.895761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0313 23:32:07.895800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0313 23:32:07.964022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0313 23:32:07.964669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-391283_f9897f0f-3d03-413e-93fa-1c3d09306add!
	I0313 23:32:07.965551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"852be536-0f86-44a3-a4f6-24d89558cf76", APIVersion:"v1", ResourceVersion:"825", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-391283_f9897f0f-3d03-413e-93fa-1c3d09306add became leader
	I0313 23:32:08.065757       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-391283_f9897f0f-3d03-413e-93fa-1c3d09306add!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-391283 -n addons-391283
helpers_test.go:261: (dbg) Run:  kubectl --context addons-391283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-4c9vx ingress-nginx-admission-patch-k4mkn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-391283 describe pod nginx ingress-nginx-admission-create-4c9vx ingress-nginx-admission-patch-k4mkn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-391283 describe pod nginx ingress-nginx-admission-create-4c9vx ingress-nginx-admission-patch-k4mkn: exit status 1 (69.7671ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-391283/192.168.39.216
	Start Time:       Wed, 13 Mar 2024 23:34:50 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b2vj2 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-b2vj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/nginx to addons-391283
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4c9vx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k4mkn" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-391283 describe pod nginx ingress-nginx-admission-create-4c9vx ingress-nginx-admission-patch-k4mkn: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (8.47s)


Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 68.65
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 58.15
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 129.66
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 99.36
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 212.55
38 TestAddons/parallel/Registry 17.65
39 TestAddons/parallel/Ingress 24.16
41 TestAddons/parallel/MetricsServer 6.02
42 TestAddons/parallel/HelmTiller 19.92
44 TestAddons/parallel/CSI 58.44
45 TestAddons/parallel/Headlamp 17.24
46 TestAddons/parallel/CloudSpanner 6.85
47 TestAddons/parallel/LocalPath 58.58
48 TestAddons/parallel/NvidiaDevicePlugin 6.58
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.7
54 TestCertOptions 49.95
55 TestCertExpiration 300.36
57 TestForceSystemdFlag 64.48
58 TestForceSystemdEnv 71.43
60 TestKVMDriverInstallOrUpdate 13.2
64 TestErrorSpam/setup 45.88
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.57
68 TestErrorSpam/unpause 1.69
69 TestErrorSpam/stop 4.8
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.14
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 45.49
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
81 TestFunctional/serial/CacheCmd/cache/add_local 3.02
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 38.71
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.52
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 4.41
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 30.9
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.14
99 TestFunctional/parallel/StatusCmd 0.92
103 TestFunctional/parallel/ServiceCmdConnect 10.56
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 55.17
107 TestFunctional/parallel/SSHCmd 0.43
108 TestFunctional/parallel/CpCmd 1.4
109 TestFunctional/parallel/MySQL 30.35
110 TestFunctional/parallel/FileSync 0.24
111 TestFunctional/parallel/CertSync 1.21
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.8
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.57
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
135 TestFunctional/parallel/ImageCommands/ImageBuild 4.89
136 TestFunctional/parallel/ImageCommands/Setup 2.69
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.65
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.19
139 TestFunctional/parallel/ServiceCmd/DeployApp 7.29
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.6
141 TestFunctional/parallel/ServiceCmd/List 0.5
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
144 TestFunctional/parallel/ServiceCmd/Format 0.35
145 TestFunctional/parallel/ServiceCmd/URL 0.38
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
148 TestFunctional/parallel/ProfileCmd/profile_list 0.38
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
150 TestFunctional/parallel/MountCmd/any-port 18.82
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.33
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.3
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
157 TestFunctional/parallel/MountCmd/specific-port 1.92
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMutliControlPlane/serial/StartCluster 306.06
166 TestMutliControlPlane/serial/DeployApp 7.16
167 TestMutliControlPlane/serial/PingHostFromPods 1.35
168 TestMutliControlPlane/serial/AddWorkerNode 52.36
169 TestMutliControlPlane/serial/NodeLabels 0.07
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.58
171 TestMutliControlPlane/serial/CopyFile 13.45
172 TestMutliControlPlane/serial/StopSecondaryNode 92.4
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.4
174 TestMutliControlPlane/serial/RestartSecondaryNode 43.64
175 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.55
176 TestMutliControlPlane/serial/RestartClusterKeepsNodes 477.46
177 TestMutliControlPlane/serial/DeleteSecondaryNode 8.36
178 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
179 TestMutliControlPlane/serial/StopCluster 276.48
180 TestMutliControlPlane/serial/RestartCluster 160.71
181 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.38
182 TestMutliControlPlane/serial/AddSecondaryNode 104.29
183 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 101.96
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.72
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.63
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.33
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 95.21
219 TestMountStart/serial/StartWithMountFirst 34.28
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 31.28
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.88
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.69
226 TestMountStart/serial/RestartStopped 24.74
227 TestMountStart/serial/VerifyMountPostStop 0.39
230 TestMultiNode/serial/FreshStart2Nodes 110.86
231 TestMultiNode/serial/DeployApp2Nodes 6.69
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 49.18
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.44
237 TestMultiNode/serial/StopNode 2.31
238 TestMultiNode/serial/StartAfterStop 26.72
239 TestMultiNode/serial/RestartKeepsNodes 296.85
240 TestMultiNode/serial/DeleteNode 2.33
241 TestMultiNode/serial/StopMultiNode 184.1
242 TestMultiNode/serial/RestartMultiNode 80.84
243 TestMultiNode/serial/ValidateNameConflict 47.48
248 TestPreload 349.1
250 TestScheduledStopUnix 119.73
254 TestRunningBinaryUpgrade 192.33
256 TestKubernetesUpgrade 257.48
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 96.23
268 TestNetworkPlugins/group/false 3.23
272 TestStoppedBinaryUpgrade/Setup 3.36
273 TestStoppedBinaryUpgrade/Upgrade 213.3
274 TestNoKubernetes/serial/StartWithStopK8s 16.97
275 TestNoKubernetes/serial/Start 70.2
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
277 TestNoKubernetes/serial/ProfileList 6.59
278 TestNoKubernetes/serial/Stop 1.48
279 TestNoKubernetes/serial/StartNoArgs 57.11
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
289 TestPause/serial/Start 101.21
290 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
291 TestNetworkPlugins/group/auto/Start 91.37
292 TestPause/serial/SecondStartNoReconfiguration 72.54
293 TestNetworkPlugins/group/flannel/Start 113.28
294 TestNetworkPlugins/group/auto/KubeletFlags 0.24
295 TestNetworkPlugins/group/auto/NetCatPod 9.25
296 TestPause/serial/Pause 0.93
297 TestPause/serial/VerifyStatus 0.3
298 TestPause/serial/Unpause 0.81
299 TestPause/serial/PauseAgain 0.91
300 TestPause/serial/DeletePaused 1.19
301 TestPause/serial/VerifyDeletedResources 0.83
302 TestNetworkPlugins/group/enable-default-cni/Start 68.21
303 TestNetworkPlugins/group/auto/DNS 0.17
304 TestNetworkPlugins/group/auto/Localhost 0.17
305 TestNetworkPlugins/group/auto/HairPin 0.17
306 TestNetworkPlugins/group/bridge/Start 113.64
307 TestNetworkPlugins/group/flannel/ControllerPod 6.01
308 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
309 TestNetworkPlugins/group/flannel/NetCatPod 9.23
310 TestNetworkPlugins/group/flannel/DNS 0.18
311 TestNetworkPlugins/group/flannel/Localhost 0.15
312 TestNetworkPlugins/group/flannel/HairPin 0.15
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.34
315 TestNetworkPlugins/group/calico/Start 104.44
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
319 TestNetworkPlugins/group/kindnet/Start 76.85
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
321 TestNetworkPlugins/group/bridge/NetCatPod 13.32
322 TestNetworkPlugins/group/bridge/DNS 0.21
323 TestNetworkPlugins/group/bridge/Localhost 0.19
324 TestNetworkPlugins/group/bridge/HairPin 0.15
325 TestNetworkPlugins/group/custom-flannel/Start 95.54
327 TestStartStop/group/old-k8s-version/serial/FirstStart 214.35
328 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
329 TestNetworkPlugins/group/calico/ControllerPod 6.01
330 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
331 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
332 TestNetworkPlugins/group/calico/KubeletFlags 0.23
333 TestNetworkPlugins/group/calico/NetCatPod 9.28
334 TestNetworkPlugins/group/kindnet/DNS 0.16
335 TestNetworkPlugins/group/kindnet/Localhost 0.13
336 TestNetworkPlugins/group/kindnet/HairPin 0.14
337 TestNetworkPlugins/group/calico/DNS 0.22
338 TestNetworkPlugins/group/calico/Localhost 0.21
339 TestNetworkPlugins/group/calico/HairPin 0.19
341 TestStartStop/group/no-preload/serial/FirstStart 223.94
343 TestStartStop/group/embed-certs/serial/FirstStart 139.7
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
346 TestNetworkPlugins/group/custom-flannel/DNS 0.23
347 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
348 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.71
351 TestStartStop/group/embed-certs/serial/DeployApp 10.3
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
353 TestStartStop/group/embed-certs/serial/Stop 92.53
354 TestStartStop/group/old-k8s-version/serial/DeployApp 10.48
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
357 TestStartStop/group/old-k8s-version/serial/Stop 92.5
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.47
360 TestStartStop/group/no-preload/serial/DeployApp 12.29
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
362 TestStartStop/group/no-preload/serial/Stop 92.49
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/embed-certs/serial/SecondStart 315.16
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
366 TestStartStop/group/old-k8s-version/serial/SecondStart 229.03
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 309.08
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
370 TestStartStop/group/no-preload/serial/SecondStart 317.43
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
374 TestStartStop/group/old-k8s-version/serial/Pause 2.68
376 TestStartStop/group/newest-cni/serial/FirstStart 61.87
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 20.01
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
380 TestStartStop/group/newest-cni/serial/Stop 3.33
381 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
383 TestStartStop/group/newest-cni/serial/SecondStart 38
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
385 TestStartStop/group/embed-certs/serial/Pause 2.92
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.07
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
393 TestStartStop/group/newest-cni/serial/Pause 2.72
394 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
395 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/no-preload/serial/Pause 2.7
TestDownloadOnly/v1.20.0/json-events (68.65s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-092824 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-092824 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m8.65372415s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (68.65s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-092824
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-092824: exit status 85 (67.418757ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |          |
	|         | -p download-only-092824        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:26:41
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:26:41.828925   12358 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:26:41.829153   12358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:41.829162   12358 out.go:304] Setting ErrFile to fd 2...
	I0313 23:26:41.829167   12358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:26:41.829363   12358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	W0313 23:26:41.829469   12358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18375-4922/.minikube/config/config.json: open /home/jenkins/minikube-integration/18375-4922/.minikube/config/config.json: no such file or directory
	I0313 23:26:41.829983   12358 out.go:298] Setting JSON to true
	I0313 23:26:41.830832   12358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":546,"bootTime":1710371856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:26:41.830886   12358 start.go:139] virtualization: kvm guest
	I0313 23:26:41.833276   12358 out.go:97] [download-only-092824] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:26:41.834876   12358 out.go:169] MINIKUBE_LOCATION=18375
	W0313 23:26:41.833372   12358 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball: no such file or directory
	I0313 23:26:41.833413   12358 notify.go:220] Checking for updates...
	I0313 23:26:41.837732   12358 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:26:41.839301   12358 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:26:41.840727   12358 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:26:41.842018   12358 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:26:41.844461   12358 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:26:41.844657   12358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:26:41.938675   12358 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:26:41.938713   12358 start.go:297] selected driver: kvm2
	I0313 23:26:41.938719   12358 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:26:41.939011   12358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:41.939154   12358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4922/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:26:41.953062   12358 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:26:41.953120   12358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:26:41.953590   12358 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:26:41.953762   12358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:26:41.953829   12358 cni.go:84] Creating CNI manager for ""
	I0313 23:26:41.953846   12358 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:26:41.953858   12358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:26:41.953924   12358 start.go:340] cluster config:
	{Name:download-only-092824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-092824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:26:41.954103   12358 iso.go:125] acquiring lock: {Name:mka186e9faf028141003d89f486cb5756102cb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:26:41.955845   12358 out.go:97] Downloading VM boot image ...
	I0313 23:26:41.955876   12358 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0313 23:26:56.311272   12358 out.go:97] Starting "download-only-092824" primary control-plane node in "download-only-092824" cluster
	I0313 23:26:56.311294   12358 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0313 23:26:56.465155   12358 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0313 23:26:56.465191   12358 cache.go:56] Caching tarball of preloaded images
	I0313 23:26:56.465370   12358 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0313 23:26:56.467133   12358 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0313 23:26:56.467155   12358 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:26:56.623687   12358 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0313 23:27:14.758866   12358 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:27:14.758955   12358 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:27:15.657652   12358 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0313 23:27:15.657992   12358 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-092824/config.json ...
	I0313 23:27:15.658022   12358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-092824/config.json: {Name:mk961e2f1c5ad11a0e3ea4f4561f64003501f018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:27:15.658166   12358 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0313 23:27:15.658322   12358 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-092824 host does not exist
	  To start a cluster, run: "minikube start -p download-only-092824"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-092824
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.28.4/json-events (58.15s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-816687 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-816687 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (58.149413281s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (58.15s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-816687
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-816687: exit status 85 (70.810919ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-092824        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-092824        | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| start   | -o=json --download-only        | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | -p download-only-816687        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:27:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:27:50.811900   12671 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:27:50.812001   12671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:27:50.812009   12671 out.go:304] Setting ErrFile to fd 2...
	I0313 23:27:50.812014   12671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:27:50.812207   12671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:27:50.812734   12671 out.go:298] Setting JSON to true
	I0313 23:27:50.813506   12671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":615,"bootTime":1710371856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:27:50.813572   12671 start.go:139] virtualization: kvm guest
	I0313 23:27:50.815661   12671 out.go:97] [download-only-816687] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:27:50.817195   12671 out.go:169] MINIKUBE_LOCATION=18375
	I0313 23:27:50.815801   12671 notify.go:220] Checking for updates...
	I0313 23:27:50.818699   12671 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:27:50.820103   12671 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:27:50.821337   12671 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:27:50.822496   12671 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:27:50.824714   12671 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:27:50.824915   12671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:27:50.855976   12671 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:27:50.855995   12671 start.go:297] selected driver: kvm2
	I0313 23:27:50.856000   12671 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:27:50.856288   12671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:27:50.856354   12671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4922/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:27:50.870498   12671 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:27:50.870529   12671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:27:50.870969   12671 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:27:50.871150   12671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:27:50.871185   12671 cni.go:84] Creating CNI manager for ""
	I0313 23:27:50.871192   12671 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:27:50.871203   12671 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:27:50.871249   12671 start.go:340] cluster config:
	{Name:download-only-816687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-816687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:27:50.871330   12671 iso.go:125] acquiring lock: {Name:mka186e9faf028141003d89f486cb5756102cb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:27:50.872933   12671 out.go:97] Starting "download-only-816687" primary control-plane node in "download-only-816687" cluster
	I0313 23:27:50.872950   12671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0313 23:27:51.625947   12671 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0313 23:27:51.625977   12671 cache.go:56] Caching tarball of preloaded images
	I0313 23:27:51.626158   12671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0313 23:27:51.627970   12671 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0313 23:27:51.627987   12671 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:27:51.781626   12671 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0313 23:28:07.009539   12671 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:28:07.009644   12671 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:28:07.875631   12671 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0313 23:28:07.875989   12671 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-816687/config.json ...
	I0313 23:28:07.876021   12671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-816687/config.json: {Name:mkc33ead055e9054348b81e01dfff6709f389c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:28:07.876195   12671 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0313 23:28:07.876362   12671 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-816687 host does not exist
	  To start a cluster, run: "minikube start -p download-only-816687"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
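The preload log above downloads the tarball with a `?checksum=md5:…` query and then runs a "verifying checksum" step against the cached file. A minimal stand-alone sketch of that verify step (hypothetical helper names; the real logic lives in minikube's `preload.go`/`download.go`):

```python
import hashlib
import tempfile

def md5_of_file(path, chunk_size=1 << 20):
    """Stream the file so multi-hundred-MB preload tarballs need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_preload(path, expected_md5):
    """Mirror of the 'verifying checksum' step: computed digest must match expected."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise ValueError(f"checksum mismatch: got {actual}, want {expected_md5}")
    return True

# Throwaway file standing in for the preload tarball.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"preloaded-images")
    tmp = f.name

expected = hashlib.md5(b"preloaded-images").hexdigest()
print(verify_preload(tmp, expected))  # True
```

On mismatch the cached tarball would be discarded and re-downloaded rather than used.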

TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-816687
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (129.66s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-717922 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-717922 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (2m9.65841015s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (129.66s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-717922
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-717922: exit status 85 (73.289525ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:26 UTC |                     |
	|         | -p download-only-092824           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| delete  | -p download-only-092824           | download-only-092824 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC | 13 Mar 24 23:27 UTC |
	| start   | -o=json --download-only           | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:27 UTC |                     |
	|         | -p download-only-816687           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC | 13 Mar 24 23:28 UTC |
	| delete  | -p download-only-816687           | download-only-816687 | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC | 13 Mar 24 23:28 UTC |
	| start   | -o=json --download-only           | download-only-717922 | jenkins | v1.32.0 | 13 Mar 24 23:28 UTC |                     |
	|         | -p download-only-717922           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/13 23:28:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0313 23:28:49.305865   12945 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:28:49.305979   12945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:28:49.305988   12945 out.go:304] Setting ErrFile to fd 2...
	I0313 23:28:49.305992   12945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:28:49.306175   12945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:28:49.306737   12945 out.go:298] Setting JSON to true
	I0313 23:28:49.307603   12945 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":674,"bootTime":1710371856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:28:49.307668   12945 start.go:139] virtualization: kvm guest
	I0313 23:28:49.310020   12945 out.go:97] [download-only-717922] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:28:49.311719   12945 out.go:169] MINIKUBE_LOCATION=18375
	I0313 23:28:49.310203   12945 notify.go:220] Checking for updates...
	I0313 23:28:49.314600   12945 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:28:49.316061   12945 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:28:49.317341   12945 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:28:49.318702   12945 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0313 23:28:49.321016   12945 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0313 23:28:49.321313   12945 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:28:49.352440   12945 out.go:97] Using the kvm2 driver based on user configuration
	I0313 23:28:49.352476   12945 start.go:297] selected driver: kvm2
	I0313 23:28:49.352482   12945 start.go:901] validating driver "kvm2" against <nil>
	I0313 23:28:49.352824   12945 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:28:49.352899   12945 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18375-4922/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0313 23:28:49.367874   12945 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0313 23:28:49.367925   12945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0313 23:28:49.368379   12945 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0313 23:28:49.368513   12945 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0313 23:28:49.368540   12945 cni.go:84] Creating CNI manager for ""
	I0313 23:28:49.368551   12945 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0313 23:28:49.368560   12945 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0313 23:28:49.368621   12945 start.go:340] cluster config:
	{Name:download-only-717922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-717922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0313 23:28:49.368754   12945 iso.go:125] acquiring lock: {Name:mka186e9faf028141003d89f486cb5756102cb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0313 23:28:49.370102   12945 out.go:97] Starting "download-only-717922" primary control-plane node in "download-only-717922" cluster
	I0313 23:28:49.370117   12945 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0313 23:28:50.123261   12945 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0313 23:28:50.123293   12945 cache.go:56] Caching tarball of preloaded images
	I0313 23:28:50.123439   12945 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0313 23:28:50.125242   12945 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0313 23:28:50.125256   12945 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:28:50.280553   12945 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0313 23:29:08.888880   12945 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:29:08.888995   12945 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-4922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0313 23:29:09.645316   12945 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0313 23:29:09.645662   12945 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-717922/config.json ...
	I0313 23:29:09.645693   12945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/download-only-717922/config.json: {Name:mkf3650eaf96e13ff29850837516078feba22aa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0313 23:29:09.645900   12945 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0313 23:29:09.646072   12945 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18375-4922/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-717922 host does not exist
	  To start a cluster, run: "minikube start -p download-only-717922"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-717922
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-719631 --alsologtostderr --binary-mirror http://127.0.0.1:37545 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-719631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-719631
--- PASS: TestBinaryMirror (0.57s)

TestOffline (99.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-147572 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-147572 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m38.293958514s)
helpers_test.go:175: Cleaning up "offline-containerd-147572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-147572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-147572: (1.068272185s)
--- PASS: TestOffline (99.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391283
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-391283: exit status 85 (63.71382ms)

-- stdout --
	* Profile "addons-391283" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391283"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391283
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-391283: exit status 85 (62.3429ms)

-- stdout --
	* Profile "addons-391283" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391283"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (212.55s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-391283 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-391283 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.546169632s)
--- PASS: TestAddons/Setup (212.55s)

TestAddons/parallel/Registry (17.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 35.76052ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v9w67" [687297a7-67fd-473c-b3ec-9cbbd110301d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006413261s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b4frv" [83d3e173-f718-464d-849f-67f34cc21b80] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006096733s
addons_test.go:340: (dbg) Run:  kubectl --context addons-391283 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-391283 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-391283 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.787181557s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 ip
2024/03/13 23:34:49 [DEBUG] GET http://192.168.39.216:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.65s)
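The registry check above boils down to a reachability probe: `wget --spider` from a busybox pod fetches headers only and succeeds on a 2xx from the registry service. A self-contained sketch of the same probe, using a local throwaway HTTP server in place of `registry.kube-system.svc.cluster.local` (stand-in names; not the test's actual code):

```python
import http.server
import threading
import urllib.request

# Stand-in for the in-cluster registry service.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of `wget --spider`: request headers only, succeed on 2xx.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/", method="HEAD")
with urllib.request.urlopen(req, timeout=5) as resp:
    status = resp.status
print(status)  # 200

server.shutdown()
```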

TestAddons/parallel/Ingress (24.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-391283 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-391283 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-391283 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bb170354-ae83-4e4e-bdb6-5eed44490606] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bb170354-ae83-4e4e-bdb6-5eed44490606] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003976601s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-391283 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.216
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-391283 addons disable ingress-dns --alsologtostderr -v=1: (1.920615901s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-391283 addons disable ingress --alsologtostderr -v=1: (7.813555144s)
--- PASS: TestAddons/parallel/Ingress (24.16s)

TestAddons/parallel/MetricsServer (6.02s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.856224ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-qg6fd" [0147464d-fb7a-4451-9f18-57a19ddb6e48] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009240477s
addons_test.go:415: (dbg) Run:  kubectl --context addons-391283 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

TestAddons/parallel/HelmTiller (19.92s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.47549ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-llgf5" [5dcdb5f0-3426-4b9c-a87e-5fd13bffbc04] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005171001s
addons_test.go:473: (dbg) Run:  kubectl --context addons-391283 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-391283 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.026093735s)
addons_test.go:478: kubectl --context addons-391283 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-391283 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-391283 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.444221858s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.92s)

TestAddons/parallel/CSI (58.44s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.374655ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-391283 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-391283 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8f275029-8c8f-4889-b858-deaffd635e18] Pending
helpers_test.go:344: "task-pv-pod" [8f275029-8c8f-4889-b858-deaffd635e18] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8f275029-8c8f-4889-b858-deaffd635e18] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004524499s
addons_test.go:584: (dbg) Run:  kubectl --context addons-391283 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391283 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391283 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-391283 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-391283 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-391283 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-391283 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2d65af7a-28ac-4bb8-ad4d-a3a7d4cb9743] Pending
helpers_test.go:344: "task-pv-pod-restore" [2d65af7a-28ac-4bb8-ad4d-a3a7d4cb9743] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2d65af7a-28ac-4bb8-ad4d-a3a7d4cb9743] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.009459874s
addons_test.go:626: (dbg) Run:  kubectl --context addons-391283 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-391283 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-391283 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-391283 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.828881623s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.44s)

TestAddons/parallel/Headlamp (17.24s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-391283 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-391283 --alsologtostderr -v=1: (1.236967965s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-qpxjk" [39c8f06f-e388-4ab2-8790-454d91406f7c] Pending
helpers_test.go:344: "headlamp-5485c556b-qpxjk" [39c8f06f-e388-4ab2-8790-454d91406f7c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-qpxjk" [39c8f06f-e388-4ab2-8790-454d91406f7c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003970511s
--- PASS: TestAddons/parallel/Headlamp (17.24s)

TestAddons/parallel/CloudSpanner (6.85s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-tc5wj" [661a802c-4800-4f5d-be8e-22fcdf5da146] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.01370186s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-391283
--- PASS: TestAddons/parallel/CloudSpanner (6.85s)

TestAddons/parallel/LocalPath (58.58s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-391283 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-391283 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [455f2d4d-fb7a-4269-be53-d3c164d7a9ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [455f2d4d-fb7a-4269-be53-d3c164d7a9ad] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [455f2d4d-fb7a-4269-be53-d3c164d7a9ad] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005136801s
addons_test.go:891: (dbg) Run:  kubectl --context addons-391283 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 ssh "cat /opt/local-path-provisioner/pvc-2211f1af-7d8e-41d4-9423-4028f6871ce2_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-391283 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-391283 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-391283 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-391283 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.725018034s)
--- PASS: TestAddons/parallel/LocalPath (58.58s)

TestAddons/parallel/NvidiaDevicePlugin (6.58s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-svvmq" [3e7ad75d-2aa7-4406-8854-6445d33ba8b0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007097412s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-391283
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-xqbcn" [1b80d418-6a5a-4169-9eaf-e24b38414a9f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.007126436s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-391283 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-391283 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.7s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-391283
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-391283: (1m32.414332638s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391283
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391283
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-391283
--- PASS: TestAddons/StoppedEnableDisable (92.70s)

TestCertOptions (49.95s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-564063 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-564063 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (48.444429115s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-564063 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-564063 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-564063 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-564063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-564063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-564063: (1.047235144s)
--- PASS: TestCertOptions (49.95s)

TestCertExpiration (300.36s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-300904 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-300904 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m27.469837478s)
E0314 00:42:14.075912   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-300904 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-300904 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (31.76015626s)
helpers_test.go:175: Cleaning up "cert-expiration-300904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-300904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-300904: (1.127588378s)
--- PASS: TestCertExpiration (300.36s)

TestForceSystemdFlag (64.48s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-233601 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-233601 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m3.276271854s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-233601 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-233601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-233601
--- PASS: TestForceSystemdFlag (64.48s)

TestForceSystemdEnv (71.43s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-188464 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-188464 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m10.208439213s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-188464 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-188464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-188464
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-188464: (1.02091097s)
--- PASS: TestForceSystemdEnv (71.43s)

TestKVMDriverInstallOrUpdate (13.2s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (13.20s)

TestErrorSpam/setup (45.88s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-759345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759345 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-759345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-759345 --driver=kvm2  --container-runtime=containerd: (45.88168544s)
--- PASS: TestErrorSpam/setup (45.88s)

TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.78s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (4.8s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop: (1.586578234s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop: (2.051068516s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-759345 --log_dir /tmp/nospam-759345 stop: (1.160225284s)
--- PASS: TestErrorSpam/stop (4.80s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18375-4922/.minikube/files/etc/test/nested/copy/12346/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (99.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0313 23:39:32.820534   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:32.826383   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:32.836613   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:32.856887   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:32.897248   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:32.977628   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:33.138015   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:33.458755   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:34.099509   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:35.379743   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:37.940788   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:43.061872   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:39:53.302097   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:40:13.782364   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-022391 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.135832702s)
--- PASS: TestFunctional/serial/StartWithProxy (99.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.49s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --alsologtostderr -v=8
E0313 23:40:54.743114   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-022391 --alsologtostderr -v=8: (45.486655611s)
functional_test.go:659: soft start took 45.487164897s for "functional-022391" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.49s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-022391 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:3.1: (1.322697595s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:3.3: (1.394338779s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 cache add registry.k8s.io/pause:latest: (1.31816744s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (3.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-022391 /tmp/TestFunctionalserialCacheCmdcacheadd_local3865757598/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache add minikube-local-cache-test:functional-022391
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 cache add minikube-local-cache-test:functional-022391: (2.656784004s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache delete minikube-local-cache-test:functional-022391
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-022391
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.460555ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 cache reload: (1.162997089s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 kubectl -- --context functional-022391 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-022391 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.71s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-022391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.707898129s)
functional_test.go:757: restart took 38.70798612s for "functional-022391" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.71s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-022391 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 logs: (1.523856495s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 logs --file /tmp/TestFunctionalserialLogsFileCmd698793128/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 logs --file /tmp/TestFunctionalserialLogsFileCmd698793128/001/logs.txt: (1.539571941s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

TestFunctional/serial/InvalidService (4.41s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-022391 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-022391
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-022391: exit status 115 (294.164508ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.159:31402 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-022391 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 config get cpus: exit status 14 (67.431498ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 config get cpus: exit status 14 (78.213047ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (30.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-022391 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-022391 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20517: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.90s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-022391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (143.622032ms)

-- stdout --
	* [functional-022391] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0313 23:42:40.796855   20388 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:42:40.796975   20388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:42:40.796985   20388 out.go:304] Setting ErrFile to fd 2...
	I0313 23:42:40.796989   20388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:42:40.797225   20388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:42:40.797767   20388 out.go:298] Setting JSON to false
	I0313 23:42:40.798756   20388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1505,"bootTime":1710371856,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:42:40.798830   20388 start.go:139] virtualization: kvm guest
	I0313 23:42:40.800884   20388 out.go:177] * [functional-022391] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0313 23:42:40.802553   20388 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:42:40.802556   20388 notify.go:220] Checking for updates...
	I0313 23:42:40.803760   20388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:42:40.805010   20388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:42:40.806229   20388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:42:40.807419   20388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:42:40.808650   20388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:42:40.810151   20388 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:42:40.810660   20388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:42:40.810710   20388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:42:40.825849   20388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0313 23:42:40.826234   20388 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:42:40.826846   20388 main.go:141] libmachine: Using API Version  1
	I0313 23:42:40.826867   20388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:42:40.827241   20388 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:42:40.827424   20388 main.go:141] libmachine: (functional-022391) Calling .DriverName
	I0313 23:42:40.827665   20388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:42:40.827933   20388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:42:40.827963   20388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:42:40.842664   20388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0313 23:42:40.843065   20388 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:42:40.843604   20388 main.go:141] libmachine: Using API Version  1
	I0313 23:42:40.843632   20388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:42:40.843916   20388 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:42:40.844120   20388 main.go:141] libmachine: (functional-022391) Calling .DriverName
	I0313 23:42:40.875688   20388 out.go:177] * Using the kvm2 driver based on existing profile
	I0313 23:42:40.876939   20388 start.go:297] selected driver: kvm2
	I0313 23:42:40.876959   20388 start.go:901] validating driver "kvm2" against &{Name:functional-022391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-022391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:42:40.877109   20388 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:42:40.879449   20388 out.go:177] 
	W0313 23:42:40.880677   20388 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0313 23:42:40.881951   20388 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-022391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-022391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (141.842186ms)

-- stdout --
	* [functional-022391] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0313 23:42:41.077126   20443 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:42:41.077384   20443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:42:41.077395   20443 out.go:304] Setting ErrFile to fd 2...
	I0313 23:42:41.077401   20443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:42:41.077689   20443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:42:41.078188   20443 out.go:298] Setting JSON to false
	I0313 23:42:41.079101   20443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1505,"bootTime":1710371856,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0313 23:42:41.079160   20443 start.go:139] virtualization: kvm guest
	I0313 23:42:41.081273   20443 out.go:177] * [functional-022391] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0313 23:42:41.082634   20443 out.go:177]   - MINIKUBE_LOCATION=18375
	I0313 23:42:41.082636   20443 notify.go:220] Checking for updates...
	I0313 23:42:41.083976   20443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0313 23:42:41.085352   20443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0313 23:42:41.086627   20443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0313 23:42:41.087921   20443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0313 23:42:41.089036   20443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0313 23:42:41.090541   20443 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:42:41.090910   20443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:42:41.090953   20443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:42:41.105559   20443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0313 23:42:41.105929   20443 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:42:41.106482   20443 main.go:141] libmachine: Using API Version  1
	I0313 23:42:41.106500   20443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:42:41.106812   20443 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:42:41.106967   20443 main.go:141] libmachine: (functional-022391) Calling .DriverName
	I0313 23:42:41.107205   20443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0313 23:42:41.107460   20443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:42:41.107491   20443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:42:41.122690   20443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0313 23:42:41.123070   20443 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:42:41.123470   20443 main.go:141] libmachine: Using API Version  1
	I0313 23:42:41.123489   20443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:42:41.123750   20443 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:42:41.123896   20443 main.go:141] libmachine: (functional-022391) Calling .DriverName
	I0313 23:42:41.154948   20443 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0313 23:42:41.156414   20443 start.go:297] selected driver: kvm2
	I0313 23:42:41.156426   20443 start.go:901] validating driver "kvm2" against &{Name:functional-022391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-022391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0313 23:42:41.156534   20443 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0313 23:42:41.158824   20443 out.go:177] 
	W0313 23:42:41.160143   20443 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0313 23:42:41.161521   20443 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)

TestFunctional/parallel/ServiceCmdConnect (10.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-022391 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-022391 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cwnmz" [66ab5d19-12f2-42e7-a4c7-85da1945ba24] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cwnmz" [66ab5d19-12f2-42e7-a4c7-85da1945ba24] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004970484s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.159:31158
functional_test.go:1671: http://192.168.39.159:31158: success! body:

Hostname: hello-node-connect-55497b8b78-cwnmz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.159:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.159:31158
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.56s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (55.17s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8bcd04cd-0d6b-48fe-b7af-df945c26dea2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005142584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-022391 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-022391 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022391 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022391 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-022391 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-022391 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f7a0c88a-36ec-4030-a6ac-31046a72a8c7] Pending
helpers_test.go:344: "sp-pod" [f7a0c88a-36ec-4030-a6ac-31046a72a8c7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f7a0c88a-36ec-4030-a6ac-31046a72a8c7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.005946084s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-022391 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-022391 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-022391 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0f91172e-d8b1-4fc0-8840-ea76d12341d0] Pending
helpers_test.go:344: "sp-pod" [0f91172e-d8b1-4fc0-8840-ea76d12341d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0f91172e-d8b1-4fc0-8840-ea76d12341d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00519173s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-022391 exec sp-pod -- ls /tmp/mount
2024/03/13 23:43:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.17s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh -n functional-022391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cp functional-022391:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1898032227/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh -n functional-022391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh -n functional-022391 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

TestFunctional/parallel/MySQL (30.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-022391 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-d8mfp" [7b057b46-e110-4272-94ac-7925085d3b82] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-d8mfp" [7b057b46-e110-4272-94ac-7925085d3b82] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.005177163s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;": exit status 1 (294.375053ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;": exit status 1 (275.74613ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;": exit status 1 (462.111629ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-022391 exec mysql-859648c796-d8mfp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.35s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12346/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /etc/test/nested/copy/12346/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12346.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /etc/ssl/certs/12346.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12346.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /usr/share/ca-certificates/12346.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/123462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /etc/ssl/certs/123462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/123462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /usr/share/ca-certificates/123462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.21s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-022391 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "sudo systemctl is-active docker": exit status 1 (238.352299ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "sudo systemctl is-active crio": exit status 1 (232.573099ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

TestFunctional/parallel/License (0.8s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-022391 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-022391
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-022391
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-022391 image ls --format short --alsologtostderr:
I0313 23:42:46.289543   20764 out.go:291] Setting OutFile to fd 1 ...
I0313 23:42:46.289852   20764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:46.289862   20764 out.go:304] Setting ErrFile to fd 2...
I0313 23:42:46.289867   20764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:46.290044   20764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
I0313 23:42:46.290544   20764 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:46.290641   20764 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:46.290987   20764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:46.291023   20764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:46.305418   20764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
I0313 23:42:46.305837   20764 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:46.306367   20764 main.go:141] libmachine: Using API Version  1
I0313 23:42:46.306391   20764 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:46.306703   20764 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:46.306880   20764 main.go:141] libmachine: (functional-022391) Calling .GetState
I0313 23:42:46.308740   20764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:46.308773   20764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:46.322445   20764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
I0313 23:42:46.322844   20764 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:46.323321   20764 main.go:141] libmachine: Using API Version  1
I0313 23:42:46.323371   20764 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:46.323641   20764 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:46.323798   20764 main.go:141] libmachine: (functional-022391) Calling .DriverName
I0313 23:42:46.323983   20764 ssh_runner.go:195] Run: systemctl --version
I0313 23:42:46.324000   20764 main.go:141] libmachine: (functional-022391) Calling .GetSSHHostname
I0313 23:42:46.326695   20764 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:46.327166   20764 main.go:141] libmachine: (functional-022391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3a:e5", ip: ""} in network mk-functional-022391: {Iface:virbr1 ExpiryTime:2024-03-14 00:39:08 +0000 UTC Type:0 Mac:52:54:00:1c:3a:e5 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-022391 Clientid:01:52:54:00:1c:3a:e5}
I0313 23:42:46.327193   20764 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined IP address 192.168.39.159 and MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:46.327353   20764 main.go:141] libmachine: (functional-022391) Calling .GetSSHPort
I0313 23:42:46.327495   20764 main.go:141] libmachine: (functional-022391) Calling .GetSSHKeyPath
I0313 23:42:46.327654   20764 main.go:141] libmachine: (functional-022391) Calling .GetSSHUsername
I0313 23:42:46.327778   20764 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/functional-022391/id_rsa Username:docker}
I0313 23:42:46.410084   20764 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:42:46.457069   20764 main.go:141] libmachine: Making call to close driver server
I0313 23:42:46.457084   20764 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:46.457400   20764 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:46.457422   20764 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:46.457433   20764 main.go:141] libmachine: Making call to close driver server
I0313 23:42:46.457447   20764 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:46.457476   20764 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
I0313 23:42:46.457663   20764 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:46.457676   20764 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-022391 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-022391  | sha256:ceb173 | 1.01kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| gcr.io/google-containers/addon-resizer      | functional-022391  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| localhost/my-image                          | functional-022391  | sha256:135217 | 775kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-022391 image ls --format table --alsologtostderr:
I0313 23:42:51.932293   20933 out.go:291] Setting OutFile to fd 1 ...
I0313 23:42:51.932817   20933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:51.932873   20933 out.go:304] Setting ErrFile to fd 2...
I0313 23:42:51.932891   20933 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:51.933338   20933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
I0313 23:42:51.934367   20933 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:51.934474   20933 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:51.934890   20933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:51.934941   20933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:51.949679   20933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
I0313 23:42:51.950130   20933 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:51.950660   20933 main.go:141] libmachine: Using API Version  1
I0313 23:42:51.950676   20933 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:51.951008   20933 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:51.951211   20933 main.go:141] libmachine: (functional-022391) Calling .GetState
I0313 23:42:51.953118   20933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:51.953152   20933 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:51.966972   20933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
I0313 23:42:51.967301   20933 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:51.967721   20933 main.go:141] libmachine: Using API Version  1
I0313 23:42:51.967743   20933 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:51.968137   20933 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:51.968349   20933 main.go:141] libmachine: (functional-022391) Calling .DriverName
I0313 23:42:51.968593   20933 ssh_runner.go:195] Run: systemctl --version
I0313 23:42:51.968622   20933 main.go:141] libmachine: (functional-022391) Calling .GetSSHHostname
I0313 23:42:51.971251   20933 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:51.971670   20933 main.go:141] libmachine: (functional-022391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3a:e5", ip: ""} in network mk-functional-022391: {Iface:virbr1 ExpiryTime:2024-03-14 00:39:08 +0000 UTC Type:0 Mac:52:54:00:1c:3a:e5 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-022391 Clientid:01:52:54:00:1c:3a:e5}
I0313 23:42:51.971700   20933 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined IP address 192.168.39.159 and MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:51.971806   20933 main.go:141] libmachine: (functional-022391) Calling .GetSSHPort
I0313 23:42:51.971941   20933 main.go:141] libmachine: (functional-022391) Calling .GetSSHKeyPath
I0313 23:42:51.972072   20933 main.go:141] libmachine: (functional-022391) Calling .GetSSHUsername
I0313 23:42:51.972222   20933 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/functional-022391/id_rsa Username:docker}
I0313 23:42:52.058982   20933 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:42:52.096920   20933 main.go:141] libmachine: Making call to close driver server
I0313 23:42:52.096934   20933 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:52.097196   20933 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:52.097215   20933 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:52.097225   20933 main.go:141] libmachine: Making call to close driver server
I0313 23:42:52.097229   20933 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
I0313 23:42:52.097233   20933 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:52.097493   20933 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:52.097546   20933 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:52.097496   20933 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-022391 image ls --format json --alsologtostderr:
[{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:ceb173f86737749731ecf274f019b74b312a532b13cfcf38cbf2051ddfd7ef9b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-022391"],"size":"1006"},{"id":"sha256:1352172ff218ed8e593c7a3324fa32c034bf5fc76995071ecd13e6b63e8cb914","repoDigests":[],"repoTags":["localhost/my-image:functional-022391"],"size":"774889"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-022391"],"size":"10823156"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-022391 image ls --format json --alsologtostderr:
I0313 23:42:51.705418   20909 out.go:291] Setting OutFile to fd 1 ...
I0313 23:42:51.705528   20909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:51.705537   20909 out.go:304] Setting ErrFile to fd 2...
I0313 23:42:51.705541   20909 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:51.705723   20909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
I0313 23:42:51.706239   20909 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:51.706327   20909 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:51.706656   20909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:51.706691   20909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:51.720901   20909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
I0313 23:42:51.721338   20909 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:51.721816   20909 main.go:141] libmachine: Using API Version  1
I0313 23:42:51.721835   20909 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:51.722157   20909 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:51.722329   20909 main.go:141] libmachine: (functional-022391) Calling .GetState
I0313 23:42:51.723906   20909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:51.723942   20909 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:51.737710   20909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
I0313 23:42:51.738092   20909 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:51.738509   20909 main.go:141] libmachine: Using API Version  1
I0313 23:42:51.738533   20909 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:51.738849   20909 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:51.738986   20909 main.go:141] libmachine: (functional-022391) Calling .DriverName
I0313 23:42:51.739198   20909 ssh_runner.go:195] Run: systemctl --version
I0313 23:42:51.739223   20909 main.go:141] libmachine: (functional-022391) Calling .GetSSHHostname
I0313 23:42:51.741553   20909 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:51.741917   20909 main.go:141] libmachine: (functional-022391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3a:e5", ip: ""} in network mk-functional-022391: {Iface:virbr1 ExpiryTime:2024-03-14 00:39:08 +0000 UTC Type:0 Mac:52:54:00:1c:3a:e5 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-022391 Clientid:01:52:54:00:1c:3a:e5}
I0313 23:42:51.741953   20909 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined IP address 192.168.39.159 and MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:51.742014   20909 main.go:141] libmachine: (functional-022391) Calling .GetSSHPort
I0313 23:42:51.742166   20909 main.go:141] libmachine: (functional-022391) Calling .GetSSHKeyPath
I0313 23:42:51.742292   20909 main.go:141] libmachine: (functional-022391) Calling .GetSSHUsername
I0313 23:42:51.742414   20909 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/functional-022391/id_rsa Username:docker}
I0313 23:42:51.829993   20909 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:42:51.867772   20909 main.go:141] libmachine: Making call to close driver server
I0313 23:42:51.867783   20909 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:51.868028   20909 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:51.868046   20909 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:51.868054   20909 main.go:141] libmachine: Making call to close driver server
I0313 23:42:51.868061   20909 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:51.868295   20909 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:51.868349   20909 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:51.868311   20909 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-022391 image ls --format yaml --alsologtostderr:
- id: sha256:ceb173f86737749731ecf274f019b74b312a532b13cfcf38cbf2051ddfd7ef9b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-022391
size: "1006"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-022391
size: "10823156"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-022391 image ls --format yaml --alsologtostderr:
I0313 23:42:46.529089   20788 out.go:291] Setting OutFile to fd 1 ...
I0313 23:42:46.529235   20788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:46.529245   20788 out.go:304] Setting ErrFile to fd 2...
I0313 23:42:46.529250   20788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:46.529420   20788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
I0313 23:42:46.530105   20788 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:46.530211   20788 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:46.530848   20788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:46.530915   20788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:46.546158   20788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
I0313 23:42:46.546622   20788 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:46.547262   20788 main.go:141] libmachine: Using API Version  1
I0313 23:42:46.547288   20788 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:46.547624   20788 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:46.547813   20788 main.go:141] libmachine: (functional-022391) Calling .GetState
I0313 23:42:46.549773   20788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:46.549828   20788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:46.564069   20788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
I0313 23:42:46.564453   20788 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:46.564905   20788 main.go:141] libmachine: Using API Version  1
I0313 23:42:46.564928   20788 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:46.565255   20788 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:46.565463   20788 main.go:141] libmachine: (functional-022391) Calling .DriverName
I0313 23:42:46.565686   20788 ssh_runner.go:195] Run: systemctl --version
I0313 23:42:46.565709   20788 main.go:141] libmachine: (functional-022391) Calling .GetSSHHostname
I0313 23:42:46.568450   20788 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:46.568846   20788 main.go:141] libmachine: (functional-022391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3a:e5", ip: ""} in network mk-functional-022391: {Iface:virbr1 ExpiryTime:2024-03-14 00:39:08 +0000 UTC Type:0 Mac:52:54:00:1c:3a:e5 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-022391 Clientid:01:52:54:00:1c:3a:e5}
I0313 23:42:46.568875   20788 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined IP address 192.168.39.159 and MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:46.568965   20788 main.go:141] libmachine: (functional-022391) Calling .GetSSHPort
I0313 23:42:46.569133   20788 main.go:141] libmachine: (functional-022391) Calling .GetSSHKeyPath
I0313 23:42:46.569298   20788 main.go:141] libmachine: (functional-022391) Calling .GetSSHUsername
I0313 23:42:46.569456   20788 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/functional-022391/id_rsa Username:docker}
I0313 23:42:46.650747   20788 ssh_runner.go:195] Run: sudo crictl images --output json
I0313 23:42:46.758855   20788 main.go:141] libmachine: Making call to close driver server
I0313 23:42:46.758875   20788 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:46.759234   20788 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:46.759230   20788 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
I0313 23:42:46.759272   20788 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:46.759283   20788 main.go:141] libmachine: Making call to close driver server
I0313 23:42:46.759290   20788 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:46.759524   20788 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:46.759542   20788 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh pgrep buildkitd: exit status 1 (199.798703ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image build -t localhost/my-image:functional-022391 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image build -t localhost/my-image:functional-022391 testdata/build --alsologtostderr: (4.465170142s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-022391 image build -t localhost/my-image:functional-022391 testdata/build --alsologtostderr:
I0313 23:42:47.014801   20842 out.go:291] Setting OutFile to fd 1 ...
I0313 23:42:47.015173   20842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:47.015215   20842 out.go:304] Setting ErrFile to fd 2...
I0313 23:42:47.015233   20842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0313 23:42:47.015680   20842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
I0313 23:42:47.016658   20842 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:47.017137   20842 config.go:182] Loaded profile config "functional-022391": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0313 23:42:47.017490   20842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:47.017549   20842 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:47.031931   20842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
I0313 23:42:47.032360   20842 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:47.032916   20842 main.go:141] libmachine: Using API Version  1
I0313 23:42:47.032937   20842 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:47.033290   20842 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:47.033450   20842 main.go:141] libmachine: (functional-022391) Calling .GetState
I0313 23:42:47.035368   20842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0313 23:42:47.035410   20842 main.go:141] libmachine: Launching plugin server for driver kvm2
I0313 23:42:47.049529   20842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
I0313 23:42:47.049869   20842 main.go:141] libmachine: () Calling .GetVersion
I0313 23:42:47.050323   20842 main.go:141] libmachine: Using API Version  1
I0313 23:42:47.050342   20842 main.go:141] libmachine: () Calling .SetConfigRaw
I0313 23:42:47.050675   20842 main.go:141] libmachine: () Calling .GetMachineName
I0313 23:42:47.050880   20842 main.go:141] libmachine: (functional-022391) Calling .DriverName
I0313 23:42:47.051106   20842 ssh_runner.go:195] Run: systemctl --version
I0313 23:42:47.051133   20842 main.go:141] libmachine: (functional-022391) Calling .GetSSHHostname
I0313 23:42:47.053856   20842 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:47.054222   20842 main.go:141] libmachine: (functional-022391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:3a:e5", ip: ""} in network mk-functional-022391: {Iface:virbr1 ExpiryTime:2024-03-14 00:39:08 +0000 UTC Type:0 Mac:52:54:00:1c:3a:e5 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:functional-022391 Clientid:01:52:54:00:1c:3a:e5}
I0313 23:42:47.054248   20842 main.go:141] libmachine: (functional-022391) DBG | domain functional-022391 has defined IP address 192.168.39.159 and MAC address 52:54:00:1c:3a:e5 in network mk-functional-022391
I0313 23:42:47.054347   20842 main.go:141] libmachine: (functional-022391) Calling .GetSSHPort
I0313 23:42:47.054509   20842 main.go:141] libmachine: (functional-022391) Calling .GetSSHKeyPath
I0313 23:42:47.054661   20842 main.go:141] libmachine: (functional-022391) Calling .GetSSHUsername
I0313 23:42:47.054788   20842 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/functional-022391/id_rsa Username:docker}
I0313 23:42:47.136240   20842 build_images.go:161] Building image from path: /tmp/build.4203378270.tar
I0313 23:42:47.136303   20842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0313 23:42:47.149250   20842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4203378270.tar
I0313 23:42:47.159173   20842 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4203378270.tar: stat -c "%s %y" /var/lib/minikube/build/build.4203378270.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4203378270.tar': No such file or directory
I0313 23:42:47.159213   20842 ssh_runner.go:362] scp /tmp/build.4203378270.tar --> /var/lib/minikube/build/build.4203378270.tar (3072 bytes)
I0313 23:42:47.200643   20842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4203378270
I0313 23:42:47.212820   20842 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4203378270 -xf /var/lib/minikube/build/build.4203378270.tar
I0313 23:42:47.224304   20842 containerd.go:379] Building image: /var/lib/minikube/build/build.4203378270
I0313 23:42:47.224376   20842 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4203378270 --local dockerfile=/var/lib/minikube/build/build.4203378270 --output type=image,name=localhost/my-image:functional-022391
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:361951a3963cbcf41f391ae97e4481180a5472a489a1e0ae101825e93e379031
#8 exporting manifest sha256:361951a3963cbcf41f391ae97e4481180a5472a489a1e0ae101825e93e379031 0.0s done
#8 exporting config sha256:1352172ff218ed8e593c7a3324fa32c034bf5fc76995071ecd13e6b63e8cb914 0.0s done
#8 naming to localhost/my-image:functional-022391 done
#8 DONE 0.3s
I0313 23:42:51.395865   20842 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4203378270 --local dockerfile=/var/lib/minikube/build/build.4203378270 --output type=image,name=localhost/my-image:functional-022391: (4.171459287s)
I0313 23:42:51.395948   20842 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4203378270
I0313 23:42:51.411217   20842 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4203378270.tar
I0313 23:42:51.424036   20842 build_images.go:217] Built localhost/my-image:functional-022391 from /tmp/build.4203378270.tar
I0313 23:42:51.424070   20842 build_images.go:133] succeeded building to: functional-022391
I0313 23:42:51.424076   20842 build_images.go:134] failed building to: 
I0313 23:42:51.424102   20842 main.go:141] libmachine: Making call to close driver server
I0313 23:42:51.424117   20842 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:51.424424   20842 main.go:141] libmachine: (functional-022391) DBG | Closing plugin on server side
I0313 23:42:51.424425   20842 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:51.424468   20842 main.go:141] libmachine: Making call to close connection to plugin binary
I0313 23:42:51.424488   20842 main.go:141] libmachine: Making call to close driver server
I0313 23:42:51.424499   20842 main.go:141] libmachine: (functional-022391) Calling .Close
I0313 23:42:51.424699   20842 main.go:141] libmachine: Successfully made call to close driver server
I0313 23:42:51.424713   20842 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.89s)
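For reference, the buildkit steps in the log above (#5 `FROM`, #6 `RUN true`, #7 `ADD content.txt /`) imply a Dockerfile of roughly this shape. This is a reconstruction from the build log, not the verbatim contents of `testdata/build`:

```dockerfile
# Sketch reconstructed from build steps #5-#7 above; the real
# testdata/build/Dockerfile pins the image by digest.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```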

TestFunctional/parallel/ImageCommands/Setup (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E0313 23:42:16.663787   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.668572406s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-022391
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr: (5.375900974s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.65s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr: (2.938550334s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-022391 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-022391 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rsrjp" [17901c4d-300c-4f8b-9f61-59ee9bb087fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rsrjp" [17901c4d-300c-4f8b-9f61-59ee9bb087fb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.009928295s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)
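The "waiting 10m0s for pods matching ... healthy within 7.009928295s" lines above come from a poll-until-ready loop in the harness. A minimal generic sketch of that pattern, with a hypothetical `check_ready` standing in for the real kubectl label-selector query:

```shell
#!/bin/sh
# Generic wait-for-healthy loop (sketch). check_ready is a stand-in
# for the real readiness probe, e.g.:
#   kubectl get pods -l app=hello-node -o jsonpath='{..phase}'
check_ready() {
  [ -f "$1" ]   # placeholder predicate for this sketch
}

# wait_healthy PATH TIMEOUT_SECONDS -> prints "healthy" or "timed out"
wait_healthy() {
  path=$1; timeout=$2
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -le "$deadline" ]; do
    if check_ready "$path"; then
      echo "healthy"
      return 0
    fi
    sleep 1
  done
  echo "timed out"
  return 1
}
```

The harness does the same thing with a Kubernetes watch rather than a sleep loop; this only illustrates the deadline/retry shape.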

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.612368519s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-022391
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image load --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr: (4.707412651s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.60s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service list -o json
functional_test.go:1490: Took "480.824322ms" to run "out/minikube-linux-amd64 -p functional-022391 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.159:32672
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.159:32672
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image save gcr.io/google-containers/addon-resizer:functional-022391 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image save gcr.io/google-containers/addon-resizer:functional-022391 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.70469838s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "321.970086ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.979754ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "332.217876ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "60.491265ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (18.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdany-port1289577499/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710373355862835238" to /tmp/TestFunctionalparallelMountCmdany-port1289577499/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710373355862835238" to /tmp/TestFunctionalparallelMountCmdany-port1289577499/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710373355862835238" to /tmp/TestFunctionalparallelMountCmdany-port1289577499/001/test-1710373355862835238
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.726226ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 13 23:42 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 13 23:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 13 23:42 test-1710373355862835238
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh cat /mount-9p/test-1710373355862835238
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-022391 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c28ed6d9-fe99-4cfd-b866-c2ebae2e6bba] Pending
helpers_test.go:344: "busybox-mount" [c28ed6d9-fe99-4cfd-b866-c2ebae2e6bba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c28ed6d9-fe99-4cfd-b866-c2ebae2e6bba] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c28ed6d9-fe99-4cfd-b866-c2ebae2e6bba] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.004350914s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-022391 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdany-port1289577499/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.82s)
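The host-side setup for the any-port mount test above boils down to seeding a directory with three marker files stamped with the current unix-nano time, then 9p-mounting that directory into the guest at /mount-9p. A sketch of the seeding step; `srcdir` is a stand-in for the real `/tmp/TestFunctionalparallelMountCmdany-port.../001` path:

```shell
#!/bin/sh
# Seed the directory that minikube mount will expose at /mount-9p
# (sketch of what the test harness does; paths are placeholders).
srcdir=$(mktemp -d)
stamp="test-$(date +%s%N)"   # e.g. test-1710373355862835238
for name in created-by-test created-by-test-removed-by-pod "$stamp"; do
  printf '%s' "$stamp" > "$srcdir/$name"
done
# Guest-side verification in the test then amounts to:
#   minikube -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
#   minikube -p <profile> ssh -- ls -la /mount-9p
ls -la "$srcdir"
```

Note the first `findmnt` probe in the log fails with exit status 1: the mount daemon is started asynchronously, so the test simply retries once the mount is up.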

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image rm gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.091378071s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-022391
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 image save --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-022391 image save --daemon gcr.io/google-containers/addon-resizer:functional-022391 --alsologtostderr: (1.261559355s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-022391
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/MountCmd/specific-port (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdspecific-port250800138/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.616345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdspecific-port250800138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "sudo umount -f /mount-9p": exit status 1 (202.506555ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-022391 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdspecific-port250800138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T" /mount1: exit status 1 (223.377641ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-022391 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-022391 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-022391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280235415/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
TestFunctional/delete_addon-resizer_images (0.06s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-022391
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-022391
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-022391
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestMutliControlPlane/serial/StartCluster (306.06s)
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-294655 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0313 23:44:32.820289   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:45:00.504350   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:47:14.075954   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.081220   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.091538   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.111824   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.152141   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.232507   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.392949   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:14.713571   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:15.353938   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:16.634244   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:19.195334   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:24.316132   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:34.556876   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:47:55.037111   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-294655 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (5m5.365077492s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (306.06s)
TestMutliControlPlane/serial/DeployApp (7.16s)
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-294655 -- rollout status deployment/busybox: (4.586431965s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-8z8g9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-glhd7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-v2p2g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-8z8g9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-glhd7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-v2p2g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-8z8g9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-glhd7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-v2p2g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (7.16s)
TestMutliControlPlane/serial/PingHostFromPods (1.35s)
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-8z8g9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-8z8g9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-glhd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-glhd7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-v2p2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-294655 -- exec busybox-5b5d89c9d6-v2p2g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.35s)
TestMutliControlPlane/serial/AddWorkerNode (52.36s)
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-294655 -v=7 --alsologtostderr
E0313 23:48:35.998187   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-294655 -v=7 --alsologtostderr: (51.490575928s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (52.36s)
TestMutliControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-294655 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.58s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.58s)
TestMutliControlPlane/serial/CopyFile (13.45s)
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp testdata/cp-test.txt ha-294655:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3750558473/001/cp-test_ha-294655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655:/home/docker/cp-test.txt ha-294655-m02:/home/docker/cp-test_ha-294655_ha-294655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test_ha-294655_ha-294655-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655:/home/docker/cp-test.txt ha-294655-m03:/home/docker/cp-test_ha-294655_ha-294655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test_ha-294655_ha-294655-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655:/home/docker/cp-test.txt ha-294655-m04:/home/docker/cp-test_ha-294655_ha-294655-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test_ha-294655_ha-294655-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp testdata/cp-test.txt ha-294655-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3750558473/001/cp-test_ha-294655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m02:/home/docker/cp-test.txt ha-294655:/home/docker/cp-test_ha-294655-m02_ha-294655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test_ha-294655-m02_ha-294655.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m02:/home/docker/cp-test.txt ha-294655-m03:/home/docker/cp-test_ha-294655-m02_ha-294655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test_ha-294655-m02_ha-294655-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m02:/home/docker/cp-test.txt ha-294655-m04:/home/docker/cp-test_ha-294655-m02_ha-294655-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test_ha-294655-m02_ha-294655-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp testdata/cp-test.txt ha-294655-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3750558473/001/cp-test_ha-294655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m03:/home/docker/cp-test.txt ha-294655:/home/docker/cp-test_ha-294655-m03_ha-294655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test_ha-294655-m03_ha-294655.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m03:/home/docker/cp-test.txt ha-294655-m02:/home/docker/cp-test_ha-294655-m03_ha-294655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test_ha-294655-m03_ha-294655-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m03:/home/docker/cp-test.txt ha-294655-m04:/home/docker/cp-test_ha-294655-m03_ha-294655-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test_ha-294655-m03_ha-294655-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp testdata/cp-test.txt ha-294655-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3750558473/001/cp-test_ha-294655-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m04:/home/docker/cp-test.txt ha-294655:/home/docker/cp-test_ha-294655-m04_ha-294655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655 "sudo cat /home/docker/cp-test_ha-294655-m04_ha-294655.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m04:/home/docker/cp-test.txt ha-294655-m02:/home/docker/cp-test_ha-294655-m04_ha-294655-m02.txt
E0313 23:49:32.819948   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m02 "sudo cat /home/docker/cp-test_ha-294655-m04_ha-294655-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 cp ha-294655-m04:/home/docker/cp-test.txt ha-294655-m03:/home/docker/cp-test_ha-294655-m04_ha-294655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 ssh -n ha-294655-m03 "sudo cat /home/docker/cp-test_ha-294655-m04_ha-294655-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (13.45s)
TestMutliControlPlane/serial/StopSecondaryNode (92.4s)
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 node stop m02 -v=7 --alsologtostderr
E0313 23:49:57.919077   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-294655 node stop m02 -v=7 --alsologtostderr: (1m31.75312783s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr: exit status 7 (641.440167ms)
-- stdout --
	ha-294655
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-294655-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-294655-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-294655-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0313 23:51:06.162186   25901 out.go:291] Setting OutFile to fd 1 ...
	I0313 23:51:06.162294   25901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:51:06.162303   25901 out.go:304] Setting ErrFile to fd 2...
	I0313 23:51:06.162307   25901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0313 23:51:06.162506   25901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0313 23:51:06.162653   25901 out.go:298] Setting JSON to false
	I0313 23:51:06.162676   25901 mustload.go:65] Loading cluster: ha-294655
	I0313 23:51:06.162801   25901 notify.go:220] Checking for updates...
	I0313 23:51:06.163007   25901 config.go:182] Loaded profile config "ha-294655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0313 23:51:06.163020   25901 status.go:255] checking status of ha-294655 ...
	I0313 23:51:06.163406   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.163459   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.182930   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0313 23:51:06.183310   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.183898   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.183935   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.184426   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.184656   25901 main.go:141] libmachine: (ha-294655) Calling .GetState
	I0313 23:51:06.186230   25901 status.go:330] ha-294655 host status = "Running" (err=<nil>)
	I0313 23:51:06.186246   25901 host.go:66] Checking if "ha-294655" exists ...
	I0313 23:51:06.186594   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.186627   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.200292   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0313 23:51:06.200680   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.201137   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.201165   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.201441   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.201589   25901 main.go:141] libmachine: (ha-294655) Calling .GetIP
	I0313 23:51:06.204209   25901 main.go:141] libmachine: (ha-294655) DBG | domain ha-294655 has defined MAC address 52:54:00:6b:37:6e in network mk-ha-294655
	I0313 23:51:06.204613   25901 main.go:141] libmachine: (ha-294655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:37:6e", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:43:29 +0000 UTC Type:0 Mac:52:54:00:6b:37:6e Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-294655 Clientid:01:52:54:00:6b:37:6e}
	I0313 23:51:06.204648   25901 main.go:141] libmachine: (ha-294655) DBG | domain ha-294655 has defined IP address 192.168.39.118 and MAC address 52:54:00:6b:37:6e in network mk-ha-294655
	I0313 23:51:06.204739   25901 host.go:66] Checking if "ha-294655" exists ...
	I0313 23:51:06.205003   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.205035   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.218368   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0313 23:51:06.218715   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.219158   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.219176   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.219446   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.219583   25901 main.go:141] libmachine: (ha-294655) Calling .DriverName
	I0313 23:51:06.219776   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:51:06.219809   25901 main.go:141] libmachine: (ha-294655) Calling .GetSSHHostname
	I0313 23:51:06.222278   25901 main.go:141] libmachine: (ha-294655) DBG | domain ha-294655 has defined MAC address 52:54:00:6b:37:6e in network mk-ha-294655
	I0313 23:51:06.222627   25901 main.go:141] libmachine: (ha-294655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:37:6e", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:43:29 +0000 UTC Type:0 Mac:52:54:00:6b:37:6e Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-294655 Clientid:01:52:54:00:6b:37:6e}
	I0313 23:51:06.222661   25901 main.go:141] libmachine: (ha-294655) DBG | domain ha-294655 has defined IP address 192.168.39.118 and MAC address 52:54:00:6b:37:6e in network mk-ha-294655
	I0313 23:51:06.222742   25901 main.go:141] libmachine: (ha-294655) Calling .GetSSHPort
	I0313 23:51:06.222919   25901 main.go:141] libmachine: (ha-294655) Calling .GetSSHKeyPath
	I0313 23:51:06.223045   25901 main.go:141] libmachine: (ha-294655) Calling .GetSSHUsername
	I0313 23:51:06.223161   25901 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/ha-294655/id_rsa Username:docker}
	I0313 23:51:06.309577   25901 ssh_runner.go:195] Run: systemctl --version
	I0313 23:51:06.318711   25901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:51:06.334262   25901 kubeconfig.go:125] found "ha-294655" server: "https://192.168.39.254:8443"
	I0313 23:51:06.334286   25901 api_server.go:166] Checking apiserver status ...
	I0313 23:51:06.334321   25901 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:51:06.349421   25901 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0313 23:51:06.359313   25901 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:51:06.359374   25901 ssh_runner.go:195] Run: ls
	I0313 23:51:06.364753   25901 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:51:06.369662   25901 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:51:06.369685   25901 status.go:422] ha-294655 apiserver status = Running (err=<nil>)
	I0313 23:51:06.369698   25901 status.go:257] ha-294655 status: &{Name:ha-294655 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:51:06.369725   25901 status.go:255] checking status of ha-294655-m02 ...
	I0313 23:51:06.370088   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.370127   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.384577   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0313 23:51:06.384926   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.385365   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.385387   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.385703   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.385889   25901 main.go:141] libmachine: (ha-294655-m02) Calling .GetState
	I0313 23:51:06.387311   25901 status.go:330] ha-294655-m02 host status = "Stopped" (err=<nil>)
	I0313 23:51:06.387324   25901 status.go:343] host is not running, skipping remaining checks
	I0313 23:51:06.387329   25901 status.go:257] ha-294655-m02 status: &{Name:ha-294655-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:51:06.387340   25901 status.go:255] checking status of ha-294655-m03 ...
	I0313 23:51:06.387618   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.387651   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.400769   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0313 23:51:06.401074   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.401481   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.401502   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.401792   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.401941   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetState
	I0313 23:51:06.403362   25901 status.go:330] ha-294655-m03 host status = "Running" (err=<nil>)
	I0313 23:51:06.403383   25901 host.go:66] Checking if "ha-294655-m03" exists ...
	I0313 23:51:06.403631   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.403660   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.416986   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0313 23:51:06.417307   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.417715   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.417731   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.418015   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.418177   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetIP
	I0313 23:51:06.420603   25901 main.go:141] libmachine: (ha-294655-m03) DBG | domain ha-294655-m03 has defined MAC address 52:54:00:c2:8c:53 in network mk-ha-294655
	I0313 23:51:06.421003   25901 main.go:141] libmachine: (ha-294655-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8c:53", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:47:20 +0000 UTC Type:0 Mac:52:54:00:c2:8c:53 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-294655-m03 Clientid:01:52:54:00:c2:8c:53}
	I0313 23:51:06.421045   25901 main.go:141] libmachine: (ha-294655-m03) DBG | domain ha-294655-m03 has defined IP address 192.168.39.65 and MAC address 52:54:00:c2:8c:53 in network mk-ha-294655
	I0313 23:51:06.421191   25901 host.go:66] Checking if "ha-294655-m03" exists ...
	I0313 23:51:06.423115   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.423162   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.436502   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0313 23:51:06.436879   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.437260   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.437283   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.437546   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.437680   25901 main.go:141] libmachine: (ha-294655-m03) Calling .DriverName
	I0313 23:51:06.437884   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:51:06.437905   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetSSHHostname
	I0313 23:51:06.440251   25901 main.go:141] libmachine: (ha-294655-m03) DBG | domain ha-294655-m03 has defined MAC address 52:54:00:c2:8c:53 in network mk-ha-294655
	I0313 23:51:06.440620   25901 main.go:141] libmachine: (ha-294655-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:8c:53", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:47:20 +0000 UTC Type:0 Mac:52:54:00:c2:8c:53 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-294655-m03 Clientid:01:52:54:00:c2:8c:53}
	I0313 23:51:06.440643   25901 main.go:141] libmachine: (ha-294655-m03) DBG | domain ha-294655-m03 has defined IP address 192.168.39.65 and MAC address 52:54:00:c2:8c:53 in network mk-ha-294655
	I0313 23:51:06.440764   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetSSHPort
	I0313 23:51:06.440924   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetSSHKeyPath
	I0313 23:51:06.441053   25901 main.go:141] libmachine: (ha-294655-m03) Calling .GetSSHUsername
	I0313 23:51:06.441201   25901 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/ha-294655-m03/id_rsa Username:docker}
	I0313 23:51:06.529206   25901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:51:06.548042   25901 kubeconfig.go:125] found "ha-294655" server: "https://192.168.39.254:8443"
	I0313 23:51:06.548065   25901 api_server.go:166] Checking apiserver status ...
	I0313 23:51:06.548098   25901 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0313 23:51:06.562765   25901 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup
	W0313 23:51:06.572973   25901 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0313 23:51:06.573016   25901 ssh_runner.go:195] Run: ls
	I0313 23:51:06.577859   25901 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0313 23:51:06.582723   25901 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0313 23:51:06.582740   25901 status.go:422] ha-294655-m03 apiserver status = Running (err=<nil>)
	I0313 23:51:06.582747   25901 status.go:257] ha-294655-m03 status: &{Name:ha-294655-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0313 23:51:06.582760   25901 status.go:255] checking status of ha-294655-m04 ...
	I0313 23:51:06.583102   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.583138   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.598479   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I0313 23:51:06.598844   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.599278   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.599301   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.599617   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.599807   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetState
	I0313 23:51:06.601505   25901 status.go:330] ha-294655-m04 host status = "Running" (err=<nil>)
	I0313 23:51:06.601515   25901 host.go:66] Checking if "ha-294655-m04" exists ...
	I0313 23:51:06.601766   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.601796   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.616687   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0313 23:51:06.617039   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.617472   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.617495   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.617784   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.617950   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetIP
	I0313 23:51:06.620668   25901 main.go:141] libmachine: (ha-294655-m04) DBG | domain ha-294655-m04 has defined MAC address 52:54:00:d7:36:eb in network mk-ha-294655
	I0313 23:51:06.621067   25901 main.go:141] libmachine: (ha-294655-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:36:eb", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:44 +0000 UTC Type:0 Mac:52:54:00:d7:36:eb Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-294655-m04 Clientid:01:52:54:00:d7:36:eb}
	I0313 23:51:06.621096   25901 main.go:141] libmachine: (ha-294655-m04) DBG | domain ha-294655-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:d7:36:eb in network mk-ha-294655
	I0313 23:51:06.621240   25901 host.go:66] Checking if "ha-294655-m04" exists ...
	I0313 23:51:06.621523   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0313 23:51:06.621563   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0313 23:51:06.637572   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0313 23:51:06.637954   25901 main.go:141] libmachine: () Calling .GetVersion
	I0313 23:51:06.638435   25901 main.go:141] libmachine: Using API Version  1
	I0313 23:51:06.638453   25901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0313 23:51:06.638737   25901 main.go:141] libmachine: () Calling .GetMachineName
	I0313 23:51:06.638900   25901 main.go:141] libmachine: (ha-294655-m04) Calling .DriverName
	I0313 23:51:06.639113   25901 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0313 23:51:06.639132   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetSSHHostname
	I0313 23:51:06.641771   25901 main.go:141] libmachine: (ha-294655-m04) DBG | domain ha-294655-m04 has defined MAC address 52:54:00:d7:36:eb in network mk-ha-294655
	I0313 23:51:06.642189   25901 main.go:141] libmachine: (ha-294655-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:36:eb", ip: ""} in network mk-ha-294655: {Iface:virbr1 ExpiryTime:2024-03-14 00:48:44 +0000 UTC Type:0 Mac:52:54:00:d7:36:eb Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-294655-m04 Clientid:01:52:54:00:d7:36:eb}
	I0313 23:51:06.642217   25901 main.go:141] libmachine: (ha-294655-m04) DBG | domain ha-294655-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:d7:36:eb in network mk-ha-294655
	I0313 23:51:06.642301   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetSSHPort
	I0313 23:51:06.642477   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetSSHKeyPath
	I0313 23:51:06.642642   25901 main.go:141] libmachine: (ha-294655-m04) Calling .GetSSHUsername
	I0313 23:51:06.642812   25901 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/ha-294655-m04/id_rsa Username:docker}
	I0313 23:51:06.727931   25901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0313 23:51:06.746172   25901 status.go:257] ha-294655-m04 status: &{Name:ha-294655-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (92.40s)
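The "unable to find freezer cgroup" warnings in the status output above are benign: minikube's apiserver check first looks for a cgroup-v1 freezer controller entry in `/proc/<pid>/cgroup`, and when that lookup fails (as it typically does on a cgroup-v2 guest) it falls back to probing `/healthz` directly, which returns 200 here. A minimal sketch of that probe, using illustrative `/proc/<pid>/cgroup` contents (not captured from this run):

```shell
# Illustrative /proc/<pid>/cgroup contents (assumed, not from this run):
# cgroup v1 lists one line per controller; v2 has a single "0::" line.
v1='7:freezer:/kubepods/podabc123
1:cpu:/kubepods/podabc123'
v2='0::/system.slice/containerd.service'

# The probe seen in the log: egrep '^[0-9]+:freezer:' on the cgroup file.
printf '%s\n' "$v1" | grep -E '^[0-9]+:freezer:'          # match -> exit 0
printf '%s\n' "$v2" | grep -E '^[0-9]+:freezer:' \
  || echo "no freezer controller; fall back to /healthz"  # exit 1 -> fallback
```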

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.4s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.40s)

TestMutliControlPlane/serial/RestartSecondaryNode (43.64s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-294655 node start m02 -v=7 --alsologtostderr: (42.71120845s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (43.64s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (477.46s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-294655 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-294655 -v=7 --alsologtostderr
E0313 23:52:14.076226   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:52:41.759444   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:54:32.820139   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0313 23:55:55.864978   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-294655 -v=7 --alsologtostderr: (4m38.014386148s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-294655 --wait=true -v=7 --alsologtostderr
E0313 23:57:14.076263   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0313 23:59:32.820018   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-294655 --wait=true -v=7 --alsologtostderr: (3m19.326422018s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-294655
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (477.46s)

TestMutliControlPlane/serial/DeleteSecondaryNode (8.36s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-294655 node delete m03 -v=7 --alsologtostderr: (7.561407097s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (8.36s)
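The `kubectl get nodes -o "go-template=…"` invocation above prints the status of each remaining node's `Ready` condition, one per line, which makes it easy to scan the output for anything other than `True`. A sketch of that scan over sample template output (the sample values are illustrative, not from this run):

```shell
# Illustrative output of the go-template above for a healthy cluster:
# one " True" line per node's Ready condition, wrapped in quotes.
template_out="' True
 True
 True'"

# A NotReady node would surface as "False" or "Unknown"; grep exits
# non-zero when every condition is True.
if printf '%s\n' "$template_out" | grep -Eq 'False|Unknown'; then
  echo "at least one node is not Ready"
else
  echo "all nodes Ready"
fi
```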

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMutliControlPlane/serial/StopCluster (276.48s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 stop -v=7 --alsologtostderr
E0314 00:02:14.076215   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0314 00:03:37.120435   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0314 00:04:32.820335   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-294655 stop -v=7 --alsologtostderr: (4m36.367185966s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr: exit status 7 (108.144356ms)
-- stdout --
	ha-294655
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-294655-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-294655-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0314 00:04:33.984521   29026 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:04:33.984796   29026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:04:33.984805   29026 out.go:304] Setting ErrFile to fd 2...
	I0314 00:04:33.984809   29026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:04:33.985053   29026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0314 00:04:33.985280   29026 out.go:298] Setting JSON to false
	I0314 00:04:33.985307   29026 mustload.go:65] Loading cluster: ha-294655
	I0314 00:04:33.985350   29026 notify.go:220] Checking for updates...
	I0314 00:04:33.985720   29026 config.go:182] Loaded profile config "ha-294655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:04:33.985735   29026 status.go:255] checking status of ha-294655 ...
	I0314 00:04:33.986111   29026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:04:33.986193   29026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:04:34.000604   29026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0314 00:04:34.000960   29026 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:04:34.001470   29026 main.go:141] libmachine: Using API Version  1
	I0314 00:04:34.001495   29026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:04:34.001879   29026 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:04:34.002100   29026 main.go:141] libmachine: (ha-294655) Calling .GetState
	I0314 00:04:34.003650   29026 status.go:330] ha-294655 host status = "Stopped" (err=<nil>)
	I0314 00:04:34.003665   29026 status.go:343] host is not running, skipping remaining checks
	I0314 00:04:34.003673   29026 status.go:257] ha-294655 status: &{Name:ha-294655 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:04:34.003712   29026 status.go:255] checking status of ha-294655-m02 ...
	I0314 00:04:34.004123   29026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:04:34.004165   29026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:04:34.018453   29026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0314 00:04:34.018761   29026 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:04:34.019186   29026 main.go:141] libmachine: Using API Version  1
	I0314 00:04:34.019212   29026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:04:34.019478   29026 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:04:34.019654   29026 main.go:141] libmachine: (ha-294655-m02) Calling .GetState
	I0314 00:04:34.021099   29026 status.go:330] ha-294655-m02 host status = "Stopped" (err=<nil>)
	I0314 00:04:34.021115   29026 status.go:343] host is not running, skipping remaining checks
	I0314 00:04:34.021122   29026 status.go:257] ha-294655-m02 status: &{Name:ha-294655-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:04:34.021138   29026 status.go:255] checking status of ha-294655-m04 ...
	I0314 00:04:34.021392   29026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:04:34.021440   29026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:04:34.035106   29026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
	I0314 00:04:34.035522   29026 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:04:34.035999   29026 main.go:141] libmachine: Using API Version  1
	I0314 00:04:34.036021   29026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:04:34.036284   29026 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:04:34.036466   29026 main.go:141] libmachine: (ha-294655-m04) Calling .GetState
	I0314 00:04:34.037961   29026 status.go:330] ha-294655-m04 host status = "Stopped" (err=<nil>)
	I0314 00:04:34.037977   29026 status.go:343] host is not running, skipping remaining checks
	I0314 00:04:34.037983   29026 status.go:257] ha-294655-m04 status: &{Name:ha-294655-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (276.48s)

TestMutliControlPlane/serial/RestartCluster (160.71s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-294655 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-294655 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m39.964792052s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
E0314 00:07:14.075515   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (160.71s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMutliControlPlane/serial/AddSecondaryNode (104.29s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-294655 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-294655 --control-plane -v=7 --alsologtostderr: (1m43.424336248s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-294655 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (104.29s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (101.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-811584 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0314 00:09:32.820253   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-811584 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m41.962124166s)
--- PASS: TestJSONOutput/start/Command (101.96s)
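With `--output=json`, minikube emits one CloudEvents-style JSON object per line, and the `DistinctCurrentSteps` and `IncreasingCurrentSteps` subtests below assert properties of the `currentstep` values in that stream. A rough sketch of extracting those values from sample events (the payloads are illustrative, not captured from this run):

```shell
# Two illustrative step events in minikube's JSON-lines output format
# (payloads assumed, not from this run).
events='{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","name":"Initial Minikube Setup"}}
{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","name":"Selecting Driver"}}'

# Extract each event's currentstep; a JSON-aware tool such as jq would be
# more robust, but a grep pipeline shows the idea.
printf '%s\n' "$events" | grep -o '"currentstep":"[0-9]*"' | grep -o '[0-9]*'
# -> 0
#    1
```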

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-811584 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-811584 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-811584 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-811584 --output=json --user=testUser: (7.327582382s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-224062 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-224062 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.966571ms)
-- stdout --
	{"specversion":"1.0","id":"922e40cc-6f20-4fc9-90da-2a7e17c7eb4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-224062] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f35e80c-a837-412c-ac2b-537206fcde18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18375"}}
	{"specversion":"1.0","id":"0beda645-46a6-43a9-b634-7909ff4bbab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"014b5ca3-0093-4cee-8ac9-42687b028192","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig"}}
	{"specversion":"1.0","id":"5ee0f6c1-3173-42bc-b142-2c3b815dcf46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube"}}
	{"specversion":"1.0","id":"cce0b1c6-4586-422f-961f-29f205ccad5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0a91ce49-a37b-4825-b695-ebd4b24b27e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a027650-bef5-4c42-ae52-eedac9e40b8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-224062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-224062
--- PASS: TestErrorJSONOutput (0.21s)
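Each line that `--output=json` emits is a CloudEvents envelope, and the final `io.k8s.sigs.minikube.error` event above carries the exit code and error name under `.data`. A minimal sketch of pulling those fields out of one such line with POSIX tools (the `event` variable below is a trimmed copy of the event above, and the `sed` patterns assume compact one-line JSON, not arbitrary JSON):

```shell
# Trimmed copy of the error event emitted above (fields abridged).
event='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'

# Extract string-valued fields with conservative sed patterns; a real
# consumer should use a proper JSON parser (e.g. jq) instead.
exitcode=$(printf '%s' "$event" | sed -n 's/.*"exitcode":"\([^"]*\)".*/\1/p')
name=$(printf '%s' "$event" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$name exits with $exitcode"
```

This is how the test harness can assert on structured output without scraping human-readable text.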

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (95.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-179548 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-179548 --driver=kvm2  --container-runtime=containerd: (46.15516519s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-182490 --driver=kvm2  --container-runtime=containerd
E0314 00:12:14.075786   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-182490 --driver=kvm2  --container-runtime=containerd: (46.582969466s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-179548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-182490
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-182490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-182490
helpers_test.go:175: Cleaning up "first-179548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-179548
--- PASS: TestMinikubeProfile (95.21s)

TestMountStart/serial/StartWithMountFirst (34.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-472800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0314 00:12:35.865107   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-472800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (33.279989795s)
--- PASS: TestMountStart/serial/StartWithMountFirst (34.28s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-472800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-472800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
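The `mount | grep 9p` step above confirms that the `--mount` host directory is attached inside the guest over the Plan 9 filesystem protocol. A sketch of the same check against a sample mount-table line (the IP and mount options are illustrative, not captured from this run):

```shell
# Sample mount table entry of the shape the VerifyMount* steps grep for.
mounts='192.168.39.1 on /minikube-host type 9p (rw,relatime,trans=tcp,port=46464)'

# grep -c counts matching lines; the check passes iff at least one
# filesystem of type 9p is mounted.
matches=$(printf '%s\n' "$mounts" | grep -c 'type 9p')
[ "$matches" -ge 1 ] && echo "9p mount present"
```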

TestMountStart/serial/StartWithMountSecond (31.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-488695 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-488695 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.27718748s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.28s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-472800 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.69s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-488695
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-488695: (1.690847023s)
--- PASS: TestMountStart/serial/Stop (1.69s)

TestMountStart/serial/RestartStopped (24.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-488695
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-488695: (23.737881367s)
--- PASS: TestMountStart/serial/RestartStopped (24.74s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-488695 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (110.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325098 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0314 00:14:32.820444   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325098 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m50.438013118s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.86s)

TestMultiNode/serial/DeployApp2Nodes (6.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-325098 -- rollout status deployment/busybox: (4.771834374s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-2rt88 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-7q2s6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-2rt88 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-7q2s6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-2rt88 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-7q2s6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.69s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-2rt88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-2rt88 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-7q2s6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-325098 -- exec busybox-5b5d89c9d6-7q2s6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
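The host-IP extraction above leans on the line layout of busybox-style `nslookup`: the pipeline `awk 'NR==5' | cut -d' ' -f3` keeps the fifth output line and takes its third space-separated field, which for `host.minikube.internal` is the gateway address. A sketch against a sample transcript (the transcript text is assumed from the busybox format, not captured from this run):

```shell
# Assumed busybox-style nslookup transcript; line 5 holds the answer record.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 keeps only line 5; field 3 of "Address 1: <ip> <name>" is the IP.
ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

Note that this parsing is positional, so a different resolver output format (e.g. glibc `nslookup`) would break the `NR==5` assumption.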

TestMultiNode/serial/AddNode (49.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-325098 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-325098 -v 3 --alsologtostderr: (48.605600008s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.18s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-325098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp testdata/cp-test.txt multinode-325098:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1449282108/001/cp-test_multinode-325098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098:/home/docker/cp-test.txt multinode-325098-m02:/home/docker/cp-test_multinode-325098_multinode-325098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test_multinode-325098_multinode-325098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098:/home/docker/cp-test.txt multinode-325098-m03:/home/docker/cp-test_multinode-325098_multinode-325098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test_multinode-325098_multinode-325098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp testdata/cp-test.txt multinode-325098-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1449282108/001/cp-test_multinode-325098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m02:/home/docker/cp-test.txt multinode-325098:/home/docker/cp-test_multinode-325098-m02_multinode-325098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test_multinode-325098-m02_multinode-325098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m02:/home/docker/cp-test.txt multinode-325098-m03:/home/docker/cp-test_multinode-325098-m02_multinode-325098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test_multinode-325098-m02_multinode-325098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp testdata/cp-test.txt multinode-325098-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1449282108/001/cp-test_multinode-325098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m03:/home/docker/cp-test.txt multinode-325098:/home/docker/cp-test_multinode-325098-m03_multinode-325098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098 "sudo cat /home/docker/cp-test_multinode-325098-m03_multinode-325098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 cp multinode-325098-m03:/home/docker/cp-test.txt multinode-325098-m02:/home/docker/cp-test_multinode-325098-m03_multinode-325098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 ssh -n multinode-325098-m02 "sudo cat /home/docker/cp-test_multinode-325098-m03_multinode-325098-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.44s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-325098 node stop m03: (1.433023013s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325098 status: exit status 7 (444.942895ms)
-- stdout --
	multinode-325098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-325098-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-325098-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr: exit status 7 (435.915574ms)
-- stdout --
	multinode-325098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-325098-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-325098-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0314 00:17:02.517322   36082 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:17:02.517869   36082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:17:02.517889   36082 out.go:304] Setting ErrFile to fd 2...
	I0314 00:17:02.517897   36082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:17:02.518331   36082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0314 00:17:02.518846   36082 out.go:298] Setting JSON to false
	I0314 00:17:02.518881   36082 mustload.go:65] Loading cluster: multinode-325098
	I0314 00:17:02.518996   36082 notify.go:220] Checking for updates...
	I0314 00:17:02.519301   36082 config.go:182] Loaded profile config "multinode-325098": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:17:02.519318   36082 status.go:255] checking status of multinode-325098 ...
	I0314 00:17:02.519714   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.519779   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.534883   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I0314 00:17:02.535302   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.535788   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.535815   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.536238   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.536455   36082 main.go:141] libmachine: (multinode-325098) Calling .GetState
	I0314 00:17:02.538054   36082 status.go:330] multinode-325098 host status = "Running" (err=<nil>)
	I0314 00:17:02.538076   36082 host.go:66] Checking if "multinode-325098" exists ...
	I0314 00:17:02.538480   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.538540   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.552617   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0314 00:17:02.552956   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.553321   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.553341   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.553612   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.553784   36082 main.go:141] libmachine: (multinode-325098) Calling .GetIP
	I0314 00:17:02.556573   36082 main.go:141] libmachine: (multinode-325098) DBG | domain multinode-325098 has defined MAC address 52:54:00:2d:d6:d6 in network mk-multinode-325098
	I0314 00:17:02.556966   36082 main.go:141] libmachine: (multinode-325098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:d6:d6", ip: ""} in network mk-multinode-325098: {Iface:virbr1 ExpiryTime:2024-03-14 01:14:21 +0000 UTC Type:0 Mac:52:54:00:2d:d6:d6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-325098 Clientid:01:52:54:00:2d:d6:d6}
	I0314 00:17:02.556996   36082 main.go:141] libmachine: (multinode-325098) DBG | domain multinode-325098 has defined IP address 192.168.39.57 and MAC address 52:54:00:2d:d6:d6 in network mk-multinode-325098
	I0314 00:17:02.557111   36082 host.go:66] Checking if "multinode-325098" exists ...
	I0314 00:17:02.557420   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.557463   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.571636   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0314 00:17:02.572011   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.572473   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.572491   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.572797   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.572976   36082 main.go:141] libmachine: (multinode-325098) Calling .DriverName
	I0314 00:17:02.573170   36082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:17:02.573202   36082 main.go:141] libmachine: (multinode-325098) Calling .GetSSHHostname
	I0314 00:17:02.575786   36082 main.go:141] libmachine: (multinode-325098) DBG | domain multinode-325098 has defined MAC address 52:54:00:2d:d6:d6 in network mk-multinode-325098
	I0314 00:17:02.576198   36082 main.go:141] libmachine: (multinode-325098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:d6:d6", ip: ""} in network mk-multinode-325098: {Iface:virbr1 ExpiryTime:2024-03-14 01:14:21 +0000 UTC Type:0 Mac:52:54:00:2d:d6:d6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-325098 Clientid:01:52:54:00:2d:d6:d6}
	I0314 00:17:02.576220   36082 main.go:141] libmachine: (multinode-325098) DBG | domain multinode-325098 has defined IP address 192.168.39.57 and MAC address 52:54:00:2d:d6:d6 in network mk-multinode-325098
	I0314 00:17:02.576378   36082 main.go:141] libmachine: (multinode-325098) Calling .GetSSHPort
	I0314 00:17:02.576540   36082 main.go:141] libmachine: (multinode-325098) Calling .GetSSHKeyPath
	I0314 00:17:02.576686   36082 main.go:141] libmachine: (multinode-325098) Calling .GetSSHUsername
	I0314 00:17:02.576810   36082 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/multinode-325098/id_rsa Username:docker}
	I0314 00:17:02.664415   36082 ssh_runner.go:195] Run: systemctl --version
	I0314 00:17:02.671688   36082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:17:02.687995   36082 kubeconfig.go:125] found "multinode-325098" server: "https://192.168.39.57:8443"
	I0314 00:17:02.688019   36082 api_server.go:166] Checking apiserver status ...
	I0314 00:17:02.688053   36082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:17:02.701694   36082 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup
	W0314 00:17:02.711353   36082 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 00:17:02.711385   36082 ssh_runner.go:195] Run: ls
	I0314 00:17:02.716187   36082 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0314 00:17:02.722034   36082 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0314 00:17:02.722050   36082 status.go:422] multinode-325098 apiserver status = Running (err=<nil>)
	I0314 00:17:02.722059   36082 status.go:257] multinode-325098 status: &{Name:multinode-325098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:17:02.722074   36082 status.go:255] checking status of multinode-325098-m02 ...
	I0314 00:17:02.722340   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.722369   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.737379   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0314 00:17:02.737778   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.738221   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.738240   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.738557   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.738754   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetState
	I0314 00:17:02.740240   36082 status.go:330] multinode-325098-m02 host status = "Running" (err=<nil>)
	I0314 00:17:02.740257   36082 host.go:66] Checking if "multinode-325098-m02" exists ...
	I0314 00:17:02.740548   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.740579   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.754685   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0314 00:17:02.755109   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.755537   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.755559   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.755901   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.756086   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetIP
	I0314 00:17:02.758363   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | domain multinode-325098-m02 has defined MAC address 52:54:00:55:c6:ae in network mk-multinode-325098
	I0314 00:17:02.758776   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c6:ae", ip: ""} in network mk-multinode-325098: {Iface:virbr1 ExpiryTime:2024-03-14 01:15:27 +0000 UTC Type:0 Mac:52:54:00:55:c6:ae Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-325098-m02 Clientid:01:52:54:00:55:c6:ae}
	I0314 00:17:02.758816   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | domain multinode-325098-m02 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:c6:ae in network mk-multinode-325098
	I0314 00:17:02.758940   36082 host.go:66] Checking if "multinode-325098-m02" exists ...
	I0314 00:17:02.759299   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.759333   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.773213   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0314 00:17:02.773547   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.773996   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.774018   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.774382   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.774555   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .DriverName
	I0314 00:17:02.774762   36082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:17:02.774783   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetSSHHostname
	I0314 00:17:02.780447   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | domain multinode-325098-m02 has defined MAC address 52:54:00:55:c6:ae in network mk-multinode-325098
	I0314 00:17:02.780820   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c6:ae", ip: ""} in network mk-multinode-325098: {Iface:virbr1 ExpiryTime:2024-03-14 01:15:27 +0000 UTC Type:0 Mac:52:54:00:55:c6:ae Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-325098-m02 Clientid:01:52:54:00:55:c6:ae}
	I0314 00:17:02.780850   36082 main.go:141] libmachine: (multinode-325098-m02) DBG | domain multinode-325098-m02 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:c6:ae in network mk-multinode-325098
	I0314 00:17:02.780957   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetSSHPort
	I0314 00:17:02.781129   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetSSHKeyPath
	I0314 00:17:02.781274   36082 main.go:141] libmachine: (multinode-325098-m02) Calling .GetSSHUsername
	I0314 00:17:02.781405   36082 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18375-4922/.minikube/machines/multinode-325098-m02/id_rsa Username:docker}
	I0314 00:17:02.863659   36082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:17:02.881275   36082 status.go:257] multinode-325098-m02 status: &{Name:multinode-325098-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:17:02.881304   36082 status.go:255] checking status of multinode-325098-m03 ...
	I0314 00:17:02.881597   36082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:17:02.881629   36082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:17:02.897253   36082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0314 00:17:02.897725   36082 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:17:02.898179   36082 main.go:141] libmachine: Using API Version  1
	I0314 00:17:02.898200   36082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:17:02.898480   36082 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:17:02.898695   36082 main.go:141] libmachine: (multinode-325098-m03) Calling .GetState
	I0314 00:17:02.900018   36082 status.go:330] multinode-325098-m03 host status = "Stopped" (err=<nil>)
	I0314 00:17:02.900034   36082 status.go:343] host is not running, skipping remaining checks
	I0314 00:17:02.900042   36082 status.go:257] multinode-325098-m03 status: &{Name:multinode-325098-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (26.72s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 node start m03 -v=7 --alsologtostderr
E0314 00:17:14.076050   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-325098 node start m03 -v=7 --alsologtostderr: (26.090819987s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.72s)

TestMultiNode/serial/RestartKeepsNodes (296.85s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325098
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-325098
E0314 00:19:32.820557   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0314 00:20:17.121416   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-325098: (3m5.39657699s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325098 --wait=true -v=8 --alsologtostderr
E0314 00:22:14.075725   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325098 --wait=true -v=8 --alsologtostderr: (1m51.337030026s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325098
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.85s)

TestMultiNode/serial/DeleteNode (2.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-325098 node delete m03: (1.820617447s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.33s)

TestMultiNode/serial/StopMultiNode (184.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 stop
E0314 00:24:32.820831   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-325098 stop: (3m3.920863742s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325098 status: exit status 7 (91.253812ms)

-- stdout --
	multinode-325098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-325098-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr: exit status 7 (92.133148ms)

-- stdout --
	multinode-325098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-325098-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0314 00:25:32.873642   38660 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:25:32.873898   38660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:25:32.873909   38660 out.go:304] Setting ErrFile to fd 2...
	I0314 00:25:32.873914   38660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:25:32.874105   38660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0314 00:25:32.874272   38660 out.go:298] Setting JSON to false
	I0314 00:25:32.874301   38660 mustload.go:65] Loading cluster: multinode-325098
	I0314 00:25:32.874354   38660 notify.go:220] Checking for updates...
	I0314 00:25:32.874817   38660 config.go:182] Loaded profile config "multinode-325098": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:25:32.874837   38660 status.go:255] checking status of multinode-325098 ...
	I0314 00:25:32.875340   38660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:25:32.875394   38660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:25:32.890822   38660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0314 00:25:32.891224   38660 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:25:32.891773   38660 main.go:141] libmachine: Using API Version  1
	I0314 00:25:32.891800   38660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:25:32.892157   38660 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:25:32.892331   38660 main.go:141] libmachine: (multinode-325098) Calling .GetState
	I0314 00:25:32.893943   38660 status.go:330] multinode-325098 host status = "Stopped" (err=<nil>)
	I0314 00:25:32.893958   38660 status.go:343] host is not running, skipping remaining checks
	I0314 00:25:32.893966   38660 status.go:257] multinode-325098 status: &{Name:multinode-325098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:25:32.894002   38660 status.go:255] checking status of multinode-325098-m02 ...
	I0314 00:25:32.894408   38660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0314 00:25:32.894451   38660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 00:25:32.908259   38660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 00:25:32.908607   38660 main.go:141] libmachine: () Calling .GetVersion
	I0314 00:25:32.908973   38660 main.go:141] libmachine: Using API Version  1
	I0314 00:25:32.908992   38660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 00:25:32.909269   38660 main.go:141] libmachine: () Calling .GetMachineName
	I0314 00:25:32.909427   38660 main.go:141] libmachine: (multinode-325098-m02) Calling .GetState
	I0314 00:25:32.910902   38660 status.go:330] multinode-325098-m02 host status = "Stopped" (err=<nil>)
	I0314 00:25:32.910918   38660 status.go:343] host is not running, skipping remaining checks
	I0314 00:25:32.910925   38660 status.go:257] multinode-325098-m02 status: &{Name:multinode-325098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.10s)

TestMultiNode/serial/RestartMultiNode (80.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325098 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325098 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m20.300203569s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-325098 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.84s)

TestMultiNode/serial/ValidateNameConflict (47.48s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-325098
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325098-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-325098-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.083385ms)

-- stdout --
	* [multinode-325098-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-325098-m02' is duplicated with machine name 'multinode-325098-m02' in profile 'multinode-325098'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-325098-m03 --driver=kvm2  --container-runtime=containerd
E0314 00:27:14.075848   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-325098-m03 --driver=kvm2  --container-runtime=containerd: (46.148088716s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-325098
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-325098: exit status 80 (227.704076ms)

-- stdout --
	* Adding node m03 to cluster multinode-325098 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-325098-m03 already exists in multinode-325098-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-325098-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.48s)

TestPreload (349.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0314 00:29:15.865966   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0314 00:29:32.820611   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m5.030774038s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925602 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-925602 image pull gcr.io/k8s-minikube/busybox: (3.012252256s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-925602
E0314 00:32:14.076133   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-925602: (1m31.713677196s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925602 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925602 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m8.017754135s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925602 image list
helpers_test.go:175: Cleaning up "test-preload-925602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-925602
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-925602: (1.105142532s)
--- PASS: TestPreload (349.10s)

TestScheduledStopUnix (119.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095969 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095969 --memory=2048 --driver=kvm2  --container-runtime=containerd: (47.95152357s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095969 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095969 -n scheduled-stop-095969
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095969 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095969 --cancel-scheduled
E0314 00:34:32.819969   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095969 -n scheduled-stop-095969
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095969
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095969 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095969
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095969: exit status 7 (74.032344ms)

-- stdout --
	scheduled-stop-095969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095969 -n scheduled-stop-095969
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095969 -n scheduled-stop-095969: exit status 7 (83.614163ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095969
--- PASS: TestScheduledStopUnix (119.73s)

TestRunningBinaryUpgrade (192.33s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0314 00:37:14.076223   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1011427773 start -p running-upgrade-630626 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1011427773 start -p running-upgrade-630626 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m21.62982979s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-630626 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-630626 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m46.253621355s)
helpers_test.go:175: Cleaning up "running-upgrade-630626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-630626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-630626: (1.278124851s)
--- PASS: TestRunningBinaryUpgrade (192.33s)

TestKubernetesUpgrade (257.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m3.129100157s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-602847
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-602847: (2.327453993s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-602847 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-602847 status --format={{.Host}}: exit status 7 (101.485366ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m51.816380443s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-602847 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (109.645059ms)

-- stdout --
	* [kubernetes-upgrade-602847] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-602847
	    minikube start -p kubernetes-upgrade-602847 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6028472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-602847 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-602847 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (18.623305977s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-602847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-602847
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-602847: (1.295096955s)
--- PASS: TestKubernetesUpgrade (257.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (98.18321ms)

-- stdout --
	* [NoKubernetes-167021] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (96.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-167021 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-167021 --driver=kvm2  --container-runtime=containerd: (1m35.947476924s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-167021 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.23s)

TestNetworkPlugins/group/false (3.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-742241 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-742241 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (109.585674ms)

-- stdout --
	* [false-742241] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0314 00:35:35.087337   42867 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:35:35.087577   42867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:35:35.087586   42867 out.go:304] Setting ErrFile to fd 2...
	I0314 00:35:35.087590   42867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:35:35.087775   42867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-4922/.minikube/bin
	I0314 00:35:35.088319   42867 out.go:298] Setting JSON to false
	I0314 00:35:35.089215   42867 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4679,"bootTime":1710371856,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 00:35:35.089280   42867 start.go:139] virtualization: kvm guest
	I0314 00:35:35.091440   42867 out.go:177] * [false-742241] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 00:35:35.092772   42867 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:35:35.093904   42867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:35:35.092792   42867 notify.go:220] Checking for updates...
	I0314 00:35:35.096604   42867 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-4922/kubeconfig
	I0314 00:35:35.098065   42867 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-4922/.minikube
	I0314 00:35:35.099577   42867 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 00:35:35.101040   42867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:35:35.102915   42867 config.go:182] Loaded profile config "NoKubernetes-167021": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:35:35.103078   42867 config.go:182] Loaded profile config "force-systemd-env-188464": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:35:35.103203   42867 config.go:182] Loaded profile config "offline-containerd-147572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:35:35.103325   42867 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:35:35.137540   42867 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 00:35:35.138908   42867 start.go:297] selected driver: kvm2
	I0314 00:35:35.138931   42867 start.go:901] validating driver "kvm2" against <nil>
	I0314 00:35:35.138948   42867 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:35:35.141000   42867 out.go:177] 
	W0314 00:35:35.142431   42867 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0314 00:35:35.143690   42867 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-742241 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-742241

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-742241

>>> host: /etc/nsswitch.conf:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/hosts:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/resolv.conf:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-742241

>>> host: crictl pods:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: crictl containers:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> k8s: describe netcat deployment:
error: context "false-742241" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-742241" does not exist

>>> k8s: netcat logs:
error: context "false-742241" does not exist

>>> k8s: describe coredns deployment:
error: context "false-742241" does not exist

>>> k8s: describe coredns pods:
error: context "false-742241" does not exist

>>> k8s: coredns logs:
error: context "false-742241" does not exist

>>> k8s: describe api server pod(s):
error: context "false-742241" does not exist

>>> k8s: api server logs:
error: context "false-742241" does not exist

>>> host: /etc/cni:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: ip a s:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: ip r s:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: iptables-save:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: iptables table nat:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> k8s: describe kube-proxy daemon set:
error: context "false-742241" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-742241" does not exist

>>> k8s: kube-proxy logs:
error: context "false-742241" does not exist

>>> host: kubelet daemon status:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: kubelet daemon config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> k8s: kubelet logs:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-742241

>>> host: docker daemon status:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: docker daemon config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/docker/daemon.json:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: docker system info:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: cri-docker daemon status:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: cri-docker daemon config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: cri-dockerd version:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: containerd daemon status:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: containerd daemon config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/containerd/config.toml:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: containerd config dump:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: crio daemon status:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: crio daemon config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: /etc/crio:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

>>> host: crio config:
* Profile "false-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742241"

----------------------- debugLogs end: false-742241 [took: 2.978323868s] --------------------------------
helpers_test.go:175: Cleaning up "false-742241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-742241
--- PASS: TestNetworkPlugins/group/false (3.23s)

TestStoppedBinaryUpgrade/Setup (3.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.36s)

TestStoppedBinaryUpgrade/Upgrade (213.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1816048255 start -p stopped-upgrade-837196 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0314 00:36:57.121983   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1816048255 start -p stopped-upgrade-837196 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m27.44082959s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1816048255 -p stopped-upgrade-837196 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1816048255 -p stopped-upgrade-837196 stop: (2.395034382s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-837196 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-837196 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m3.459553058s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (213.30s)

TestNoKubernetes/serial/StartWithStopK8s (16.97s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (15.71737086s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-167021 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-167021 status -o json: exit status 2 (257.338572ms)

-- stdout --
	{"Name":"NoKubernetes-167021","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-167021
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.97s)
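The `status -o json` output above can be read mechanically: the host VM is running while every Kubernetes component is stopped, which is consistent with the command's non-zero exit (2) being treated as a pass here. A minimal stand-in parse of the logged JSON (plain `sed`, no minikube required; field names are taken directly from the output above):

```shell
# Parse the status JSON captured in the log above (sketch; no cluster needed).
status='{"Name":"NoKubernetes-167021","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
host=$(printf '%s' "$status" | sed 's/.*"Host":"\([^"]*\)".*/\1/')
kubelet=$(printf '%s' "$status" | sed 's/.*"Kubelet":"\([^"]*\)".*/\1/')
echo "host=$host kubelet=$kubelet"   # host=Running kubelet=Stopped
```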

TestNoKubernetes/serial/Start (70.2s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-167021 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m10.202297102s)
--- PASS: TestNoKubernetes/serial/Start (70.20s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-167021 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-167021 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.537144ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
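The exit codes carry the whole assertion in this check: `systemctl is-active --quiet` exits 0 only when the unit is active, and the `status 3` in stderr is systemd's conventional code for an inactive unit, so a non-zero exit is exactly what the test wants. A hedged sketch of the same logic (the `probe` helper is hypothetical; `false` stands in for probing an inactive unit):

```shell
# Hypothetical stand-in for:
#   minikube ssh "sudo systemctl is-active --quiet service kubelet"
# `false` exits 1 here, simulating systemd's non-zero exit for an inactive
# unit (commonly 3 for "inactive", matching the log above).
probe() { "$@"; }

if ! probe false; then
  echo "kubelet not running, as the test expects"
fi
```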

TestNoKubernetes/serial/ProfileList (6.59s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.502497657s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.083050802s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.59s)

TestNoKubernetes/serial/Stop (1.48s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-167021
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-167021: (1.477583342s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

TestNoKubernetes/serial/StartNoArgs (57.11s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-167021 --driver=kvm2  --container-runtime=containerd
E0314 00:39:32.820113   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-167021 --driver=kvm2  --container-runtime=containerd: (57.107619053s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-167021 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-167021 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.648952ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/Start (101.21s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145743 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-145743 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m41.213579971s)
--- PASS: TestPause/serial/Start (101.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-837196
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-837196: (1.159399782s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestNetworkPlugins/group/auto/Start (91.37s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m31.367418307s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.37s)

TestPause/serial/SecondStartNoReconfiguration (72.54s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145743 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-145743 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m12.516107603s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (72.54s)

TestNetworkPlugins/group/flannel/Start (113.28s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m53.275372665s)
--- PASS: TestNetworkPlugins/group/flannel/Start (113.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rs855" [aca8f0f1-abbf-4306-b2c6-7a0f71113fe0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rs855" [aca8f0f1-abbf-4306-b2c6-7a0f71113fe0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004741568s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)

TestPause/serial/Pause (0.93s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-145743 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.3s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-145743 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-145743 --output=json --layout=cluster: exit status 2 (294.871235ms)

-- stdout --
	{"Name":"pause-145743","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-145743","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
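The status JSON above encodes component state as HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK (all three appear in the output), and a paused cluster makes `status` exit non-zero, hence the exit 2 treated as a pass here. A stand-in classification of the logged fields (JSON trimmed to the relevant keys; no cluster required):

```shell
# Classify the cluster from the StatusName field of the logged JSON (sketch).
status_json='{"Name":"pause-145743","StatusCode":418,"StatusName":"Paused"}'
case "$status_json" in
  *'"StatusName":"Paused"'*) state=paused ;;
  *)                         state=running ;;
esac
echo "cluster state: $state"   # cluster state: paused
```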

TestPause/serial/Unpause (0.81s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-145743 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (0.91s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-145743 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (1.19s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-145743 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-145743 --alsologtostderr -v=5: (1.188064844s)
--- PASS: TestPause/serial/DeletePaused (1.19s)

TestPause/serial/VerifyDeletedResources (0.83s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.83s)

TestNetworkPlugins/group/enable-default-cni/Start (68.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m8.212145853s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.21s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (113.64s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m53.641743543s)
--- PASS: TestNetworkPlugins/group/bridge/Start (113.64s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mkrfw" [218907e2-6e3a-4a8a-86bb-52de0e4cb33d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004921887s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g6hsp" [fb341092-5e18-4d04-a927-b1d63a16be4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g6hsp" [fb341092-5e18-4d04-a927-b1d63a16be4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005304906s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zs7q9" [ded55662-76e4-41a5-9823-6b53985d294a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zs7q9" [ded55662-76e4-41a5-9823-6b53985d294a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004289337s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

TestNetworkPlugins/group/calico/Start (104.44s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m44.442912888s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (76.85s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0314 00:44:32.819843   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m16.850442873s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.85s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nwzs6" [21908525-f675-44dd-a5ef-93c151dd49d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nwzs6" [21908525-f675-44dd-a5ef-93c151dd49d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005461745s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.32s)

TestNetworkPlugins/group/bridge/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (95.54s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-742241 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m35.544215464s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.54s)

TestStartStop/group/old-k8s-version/serial/FirstStart (214.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-775403 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-775403 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m34.353957759s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (214.35s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wrcdb" [8ec8b751-7570-4f1e-9b3e-c77f3d21fb0f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007711708s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fnr69" [7fb185a0-2ac0-423b-bc5f-951b8c87d47e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006769609s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4dfk4" [3d621c2a-b872-40de-99f3-bcf15b16cbff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4dfk4" [3d621c2a-b872-40de-99f3-bcf15b16cbff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004006155s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvnlp" [62608082-f5bd-43bd-bbcb-0e2fdbf9797a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cvnlp" [62608082-f5bd-43bd-bbcb-0e2fdbf9797a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00677203s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (223.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008137 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008137 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (3m43.937804937s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (223.94s)

TestStartStop/group/embed-certs/serial/FirstStart (139.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-746433 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-746433 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (2m19.702335907s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (139.70s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-742241 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-742241 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lxfgl" [dc301c79-aa9b-4a4a-b217-fe7f5decd30b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lxfgl" [dc301c79-aa9b-4a4a-b217-fe7f5decd30b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004361876s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-742241 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-742241 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
E0314 00:56:06.774161   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-910271 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 00:47:34.179164   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.185236   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.195460   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.215769   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.256049   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.336401   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.496810   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:34.816989   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:35.457580   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:36.737899   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:39.298174   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:44.418481   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:47:54.659566   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:48:15.140099   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:48:20.214490   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.219751   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.230016   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.250252   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.290560   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.370862   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.531313   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:20.852433   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:21.493500   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:22.774258   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:25.335095   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:48:30.455357   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-910271 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m39.709221096s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.71s)

TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-746433 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [28c2fb35-e1e1-470f-9fc3-f16da5466908] Pending
helpers_test.go:344: "busybox" [28c2fb35-e1e1-470f-9fc3-f16da5466908] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [28c2fb35-e1e1-470f-9fc3-f16da5466908] Running
E0314 00:48:40.696251   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004812784s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-746433 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-746433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-746433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.150714376s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-746433 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/Stop (92.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-746433 --alsologtostderr -v=3
E0314 00:48:49.371458   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.376691   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.387523   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.407792   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.448072   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.529016   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:49.690070   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:50.010568   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:50.650733   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:51.931564   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:54.492220   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:48:56.100441   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:48:59.612731   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-746433 --alsologtostderr -v=3: (1m32.52650681s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775403 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [469411b6-523e-4e0f-b467-c5c30447fb6e] Pending
helpers_test.go:344: "busybox" [469411b6-523e-4e0f-b467-c5c30447fb6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0314 00:49:01.176880   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
helpers_test.go:344: "busybox" [469411b6-523e-4e0f-b467-c5c30447fb6e] Running
E0314 00:49:09.852939   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004693917s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775403 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-910271 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2f71075-d118-4723-8c4e-1fa692ab2b14] Pending
helpers_test.go:344: "busybox" [a2f71075-d118-4723-8c4e-1fa692ab2b14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2f71075-d118-4723-8c4e-1fa692ab2b14] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003660249s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-910271 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-775403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-775403 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (92.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-775403 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-775403 --alsologtostderr -v=3: (1m32.495108042s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-910271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-910271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.086633085s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-910271 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-910271 --alsologtostderr -v=3
E0314 00:49:30.333735   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:49:32.820036   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
E0314 00:49:42.137328   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-910271 --alsologtostderr -v=3: (1m32.468166255s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

TestStartStop/group/no-preload/serial/DeployApp (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008137 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e54ba707-9784-4216-b0c4-9873373cbae1] Pending
E0314 00:49:53.903390   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:53.908646   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:53.918939   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:53.939204   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:53.979457   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:54.059949   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:54.220545   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e54ba707-9784-4216-b0c4-9873373cbae1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0314 00:49:54.541344   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:55.182516   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:56.463145   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:49:59.024047   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e54ba707-9784-4216-b0c4-9873373cbae1] Running
E0314 00:50:04.144204   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004909242s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008137 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-008137 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/Stop (92.49s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-008137 --alsologtostderr -v=3
E0314 00:50:11.294103   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:50:14.385116   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:50:18.021588   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-008137 --alsologtostderr -v=3: (1m32.492267456s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.49s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-746433 -n embed-certs-746433
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-746433 -n embed-certs-746433: exit status 7 (81.244829ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-746433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (315.16s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-746433 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 00:50:33.997530   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.002789   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.013021   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.033365   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.073709   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.154061   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.314580   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.635025   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:34.865562   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:50:35.276003   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:36.556475   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:39.089487   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.094744   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.104980   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.117168   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:39.125293   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.165584   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.246258   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.406931   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:39.727484   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:40.368548   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:41.649059   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-746433 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m14.776655558s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-746433 -n embed-certs-746433
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775403 -n old-k8s-version-775403
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775403 -n old-k8s-version-775403: exit status 7 (93.018033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-775403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (229.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-775403 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0314 00:50:44.209966   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:50:44.237334   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:49.330272   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-775403 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m48.761613489s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-775403 -n old-k8s-version-775403
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (229.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271: exit status 7 (91.768474ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-910271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (309.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-910271 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 00:50:54.478305   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:50:59.570950   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:51:04.057825   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:51:14.958748   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:51:15.826559   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:51:20.051938   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:51:33.214369   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-910271 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m8.320806227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (309.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008137 -n no-preload-008137
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008137 -n no-preload-008137: exit status 7 (89.305886ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-008137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
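The `status --format={{.Host}}` invocations above select a single field from minikube's status via a Go `text/template` expression. A minimal sketch of how such a format string renders (the `Status` struct here is a simplified stand-in for illustration, not minikube's actual type; only the field selectors `{{.Host}}`, `{{.Kubelet}}`, `{{.APIServer}}` mirror the log):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a hypothetical, simplified stand-in for the structure
// that a --format style Go template would be rendered against.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render parses a --format string as a Go text/template and
// executes it against the given Status value.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	out, err := render("{{.Host}}", st)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "Stopped"
}
```

This is why the tests above treat the command's stdout (`Stopped`, `Paused`) as the field value itself: the template emits exactly the selected field, with no surrounding formatting.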

TestStartStop/group/no-preload/serial/SecondStart (317.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008137 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 00:51:55.919740   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:51:59.904154   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:51:59.909448   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:51:59.919753   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:51:59.940061   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:51:59.980867   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:00.061369   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:00.221952   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:00.542633   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:01.012893   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:52:01.182980   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:02.463689   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:05.024284   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:10.145235   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:14.075573   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0314 00:52:20.386015   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:52:34.179603   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:52:37.747665   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:52:40.866326   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:53:01.861801   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/auto-742241/client.crt: no such file or directory
E0314 00:53:17.840673   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:53:20.213974   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:53:21.827015   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
E0314 00:53:22.933062   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
E0314 00:53:37.122312   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
E0314 00:53:47.898934   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/flannel-742241/client.crt: no such file or directory
E0314 00:53:49.372119   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:54:17.055443   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/enable-default-cni-742241/client.crt: no such file or directory
E0314 00:54:32.820459   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/addons-391283/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008137 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m17.161225127s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008137 -n no-preload-008137
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (317.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ql2wb" [d83f204f-37e3-4554-ada3-488dfe446878] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004763496s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ql2wb" [d83f204f-37e3-4554-ada3-488dfe446878] Running
E0314 00:54:43.747361   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004610003s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-775403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-775403 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-775403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775403 -n old-k8s-version-775403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775403 -n old-k8s-version-775403: exit status 2 (258.574815ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775403 -n old-k8s-version-775403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775403 -n old-k8s-version-775403: exit status 2 (257.160133ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-775403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-775403 -n old-k8s-version-775403
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-775403 -n old-k8s-version-775403
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

TestStartStop/group/newest-cni/serial/FirstStart (61.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-325161 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 00:54:53.903746   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
E0314 00:55:21.588000   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/bridge-742241/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-325161 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m1.872256055s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-58d22" [b41b088e-9d4a-427f-a85e-06b3bbcb85b5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0314 00:55:33.997891   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
E0314 00:55:39.089538   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/calico-742241/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-58d22" [b41b088e-9d4a-427f-a85e-06b3bbcb85b5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.009443387s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-325161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-325161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.193375589s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.33s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-325161 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-325161 --alsologtostderr -v=3: (3.327433453s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-58d22" [b41b088e-9d4a-427f-a85e-06b3bbcb85b5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006453798s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-746433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-325161 -n newest-cni-325161
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-325161 -n newest-cni-325161: exit status 7 (85.049099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-325161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.00s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-325161 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-325161 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (37.681297825s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-325161 -n newest-cni-325161
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-746433 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-746433 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-746433 -n embed-certs-746433
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-746433 -n embed-certs-746433: exit status 2 (255.003644ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-746433 -n embed-certs-746433
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-746433 -n embed-certs-746433: exit status 2 (253.474902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-746433 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-746433 -n embed-certs-746433
E0314 00:56:01.681480   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/kindnet-742241/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-746433 -n embed-certs-746433
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-clpk6" [b2e4180a-fa49-43ef-8526-265c3bfff94b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00591043s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-clpk6" [b2e4180a-fa49-43ef-8526-265c3bfff94b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005560949s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-910271 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-910271 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-910271 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271: exit status 2 (248.395089ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271: exit status 2 (245.323762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-910271 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-910271 -n default-k8s-diff-port-910271
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-325161 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-325161 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-325161 -n newest-cni-325161
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-325161 -n newest-cni-325161: exit status 2 (259.61372ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-325161 -n newest-cni-325161
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-325161 -n newest-cni-325161: exit status 2 (262.165669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-325161 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-325161 -n newest-cni-325161
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-325161 -n newest-cni-325161
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p4nrz" [f7391fec-427a-4ab3-9561-944f5594c068] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0314 00:56:59.904448   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/custom-flannel-742241/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p4nrz" [f7391fec-427a-4ab3-9561-944f5594c068] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005476253s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-p4nrz" [f7391fec-427a-4ab3-9561-944f5594c068] Running
E0314 00:57:14.076161   12346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-4922/.minikube/profiles/functional-022391/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006321249s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-008137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-008137 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.70s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-008137 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008137 -n no-preload-008137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008137 -n no-preload-008137: exit status 2 (242.038779ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008137 -n no-preload-008137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008137 -n no-preload-008137: exit status 2 (246.986247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-008137 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008137 -n no-preload-008137
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008137 -n no-preload-008137
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.70s)

                                                
                                    

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.25
271 TestNetworkPlugins/group/cilium 3.46
286 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0.00s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0.00s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-742241 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-742241

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-742241

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/hosts:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/resolv.conf:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-742241

>>> host: crictl pods:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: crictl containers:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> k8s: describe netcat deployment:
error: context "kubenet-742241" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-742241" does not exist

>>> k8s: netcat logs:
error: context "kubenet-742241" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-742241" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-742241" does not exist

>>> k8s: coredns logs:
error: context "kubenet-742241" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-742241" does not exist

>>> k8s: api server logs:
error: context "kubenet-742241" does not exist

>>> host: /etc/cni:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: ip a s:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: ip r s:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: iptables-save:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: iptables table nat:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-742241" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-742241" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-742241" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: kubelet daemon config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> k8s: kubelet logs:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-742241

>>> host: docker daemon status:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: docker daemon config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: docker system info:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: cri-docker daemon status:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: cri-docker daemon config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: cri-dockerd version:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: containerd daemon status:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: containerd daemon config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: containerd config dump:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: crio daemon status:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: crio daemon config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: /etc/crio:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

>>> host: crio config:
* Profile "kubenet-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742241"

----------------------- debugLogs end: kubenet-742241 [took: 3.1063704s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-742241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-742241
--- SKIP: TestNetworkPlugins/group/kubenet (3.25s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-742241 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-742241

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-742241

>>> host: /etc/nsswitch.conf:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

>>> host: /etc/hosts:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

>>> host: /etc/resolv.conf:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-742241

>>> host: crictl pods:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-742241

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-742241

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-742241

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-742241

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-742241" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-742241

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-742241" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742241"

                                                
                                                
----------------------- debugLogs end: cilium-742241 [took: 3.31364164s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-742241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-742241
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-292093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-292093
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)