Test Report: KVM_Linux_containerd 19024

79b1b42e4c1f52f497f2c052d5e760f5044cd55a:2024-06-04:34765

Test fail (1/326)

| Order | Failed test                          | Duration |
|-------|--------------------------------------|----------|
| 31    | TestAddons/parallel/InspektorGadget  | 8.04s    |
TestAddons/parallel/InspektorGadget (8.04s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jhs4g" [2bf0e45f-6013-42a5-8493-4c104d94e665] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004379209s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-450158
addons_test.go:843: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-450158: exit status 11 (322.474269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-04T21:36:06Z" level=error msg="stat /run/containerd/runc/k8s.io/737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:844: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-450158" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-450158 -n addons-450158
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 logs -n 25: (1.826594738s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |                     |
	|         | -p download-only-543481              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-543481              | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| start   | -o=json --download-only              | download-only-645945 | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | -p download-only-645945              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:31 UTC |
	| delete  | -p download-only-645945              | download-only-645945 | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:31 UTC |
	| delete  | -p download-only-543481              | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:31 UTC |
	| delete  | -p download-only-645945              | download-only-645945 | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:31 UTC |
	| start   | --download-only -p                   | binary-mirror-460204 | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC |                     |
	|         | binary-mirror-460204                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40673               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-460204              | binary-mirror-460204 | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:31 UTC |
	| addons  | enable dashboard -p                  | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC |                     |
	|         | addons-450158                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC |                     |
	|         | addons-450158                        |                      |         |         |                     |                     |
	| start   | -p addons-450158 --wait=true         | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:31 UTC | 04 Jun 24 21:35 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:35 UTC | 04 Jun 24 21:35 UTC |
	|         | -p addons-450158                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:35 UTC | 04 Jun 24 21:35 UTC |
	|         | -p addons-450158                     |                      |         |         |                     |                     |
	| addons  | addons-450158 addons                 | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:36 UTC | 04 Jun 24 21:36 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-450158 ip                     | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:36 UTC | 04 Jun 24 21:36 UTC |
	| addons  | addons-450158 addons disable         | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:36 UTC | 04 Jun 24 21:36 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-450158 addons disable         | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:36 UTC | 04 Jun 24 21:36 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-450158        | jenkins | v1.33.1 | 04 Jun 24 21:36 UTC |                     |
	|         | addons-450158                        |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:31:14
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:31:14.783990   14142 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:31:14.784114   14142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:31:14.784125   14142 out.go:304] Setting ErrFile to fd 2...
	I0604 21:31:14.784131   14142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:31:14.784297   14142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 21:31:14.784891   14142 out.go:298] Setting JSON to false
	I0604 21:31:14.785649   14142 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":816,"bootTime":1717535859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 21:31:14.785698   14142 start.go:139] virtualization: kvm guest
	I0604 21:31:14.787959   14142 out.go:177] * [addons-450158] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0604 21:31:14.789384   14142 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 21:31:14.789442   14142 notify.go:220] Checking for updates...
	I0604 21:31:14.790934   14142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:31:14.792245   14142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:31:14.793409   14142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:31:14.794628   14142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0604 21:31:14.795834   14142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 21:31:14.797143   14142 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:31:14.826828   14142 out.go:177] * Using the kvm2 driver based on user configuration
	I0604 21:31:14.828089   14142 start.go:297] selected driver: kvm2
	I0604 21:31:14.828098   14142 start.go:901] validating driver "kvm2" against <nil>
	I0604 21:31:14.828108   14142 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 21:31:14.828811   14142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:31:14.828882   14142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19024-5817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0604 21:31:14.842966   14142 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0604 21:31:14.843021   14142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:31:14.843218   14142 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 21:31:14.843279   14142 cni.go:84] Creating CNI manager for ""
	I0604 21:31:14.843291   14142 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0604 21:31:14.843298   14142 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 21:31:14.843349   14142 start.go:340] cluster config:
	{Name:addons-450158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-450158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:31:14.843436   14142 iso.go:125] acquiring lock: {Name:mkda4cefdbcc254212dd1652a198fa2930d04a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:31:14.845148   14142 out.go:177] * Starting "addons-450158" primary control-plane node in "addons-450158" cluster
	I0604 21:31:14.846351   14142 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0604 21:31:14.846373   14142 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4
	I0604 21:31:14.846379   14142 cache.go:56] Caching tarball of preloaded images
	I0604 21:31:14.846446   14142 preload.go:173] Found /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 21:31:14.846456   14142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on containerd
	I0604 21:31:14.846708   14142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/config.json ...
	I0604 21:31:14.846724   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/config.json: {Name:mk89e7110b62b42039300e4f28571e180e31fee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:14.846844   14142 start.go:360] acquireMachinesLock for addons-450158: {Name:mk6b7271bbdb2111ab3b1f8fb38dad0caa5e22d3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 21:31:14.846885   14142 start.go:364] duration metric: took 28.971µs to acquireMachinesLock for "addons-450158"
	I0604 21:31:14.846901   14142 start.go:93] Provisioning new machine with config: &{Name:addons-450158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-450158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0604 21:31:14.846948   14142 start.go:125] createHost starting for "" (driver="kvm2")
	I0604 21:31:14.848580   14142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0604 21:31:14.848723   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:31:14.848762   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:31:14.861655   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0604 21:31:14.862093   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:31:14.862632   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:31:14.862651   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:31:14.862984   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:31:14.863139   14142 main.go:141] libmachine: (addons-450158) Calling .GetMachineName
	I0604 21:31:14.863294   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:14.863449   14142 start.go:159] libmachine.API.Create for "addons-450158" (driver="kvm2")
	I0604 21:31:14.863473   14142 client.go:168] LocalClient.Create starting
	I0604 21:31:14.863503   14142 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem
	I0604 21:31:15.013268   14142 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/cert.pem
	I0604 21:31:15.127675   14142 main.go:141] libmachine: Running pre-create checks...
	I0604 21:31:15.127698   14142 main.go:141] libmachine: (addons-450158) Calling .PreCreateCheck
	I0604 21:31:15.128180   14142 main.go:141] libmachine: (addons-450158) Calling .GetConfigRaw
	I0604 21:31:15.128642   14142 main.go:141] libmachine: Creating machine...
	I0604 21:31:15.128657   14142 main.go:141] libmachine: (addons-450158) Calling .Create
	I0604 21:31:15.128802   14142 main.go:141] libmachine: (addons-450158) Creating KVM machine...
	I0604 21:31:15.130018   14142 main.go:141] libmachine: (addons-450158) DBG | found existing default KVM network
	I0604 21:31:15.130699   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:15.130569   14163 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0604 21:31:15.130719   14142 main.go:141] libmachine: (addons-450158) DBG | created network xml: 
	I0604 21:31:15.130732   14142 main.go:141] libmachine: (addons-450158) DBG | <network>
	I0604 21:31:15.130751   14142 main.go:141] libmachine: (addons-450158) DBG |   <name>mk-addons-450158</name>
	I0604 21:31:15.130765   14142 main.go:141] libmachine: (addons-450158) DBG |   <dns enable='no'/>
	I0604 21:31:15.130772   14142 main.go:141] libmachine: (addons-450158) DBG |   
	I0604 21:31:15.130779   14142 main.go:141] libmachine: (addons-450158) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0604 21:31:15.130786   14142 main.go:141] libmachine: (addons-450158) DBG |     <dhcp>
	I0604 21:31:15.130792   14142 main.go:141] libmachine: (addons-450158) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0604 21:31:15.130799   14142 main.go:141] libmachine: (addons-450158) DBG |     </dhcp>
	I0604 21:31:15.130804   14142 main.go:141] libmachine: (addons-450158) DBG |   </ip>
	I0604 21:31:15.130813   14142 main.go:141] libmachine: (addons-450158) DBG |   
	I0604 21:31:15.130823   14142 main.go:141] libmachine: (addons-450158) DBG | </network>
	I0604 21:31:15.130850   14142 main.go:141] libmachine: (addons-450158) DBG | 
	I0604 21:31:15.136027   14142 main.go:141] libmachine: (addons-450158) DBG | trying to create private KVM network mk-addons-450158 192.168.39.0/24...
	I0604 21:31:15.198692   14142 main.go:141] libmachine: (addons-450158) DBG | private KVM network mk-addons-450158 192.168.39.0/24 created
	I0604 21:31:15.198724   14142 main.go:141] libmachine: (addons-450158) Setting up store path in /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158 ...
	I0604 21:31:15.198743   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:15.198659   14163 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:31:15.198807   14142 main.go:141] libmachine: (addons-450158) Building disk image from file:///home/jenkins/minikube-integration/19024-5817/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso
	I0604 21:31:15.198842   14142 main.go:141] libmachine: (addons-450158) Downloading /home/jenkins/minikube-integration/19024-5817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19024-5817/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 21:31:15.444138   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:15.444038   14163 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa...
	I0604 21:31:15.621899   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:15.621786   14163 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/addons-450158.rawdisk...
	I0604 21:31:15.621933   14142 main.go:141] libmachine: (addons-450158) DBG | Writing magic tar header
	I0604 21:31:15.621947   14142 main.go:141] libmachine: (addons-450158) DBG | Writing SSH key tar header
	I0604 21:31:15.621959   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:15.621897   14163 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158 ...
	I0604 21:31:15.622010   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158
	I0604 21:31:15.622037   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158 (perms=drwx------)
	I0604 21:31:15.622050   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins/minikube-integration/19024-5817/.minikube/machines (perms=drwxr-xr-x)
	I0604 21:31:15.622061   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19024-5817/.minikube/machines
	I0604 21:31:15.622072   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins/minikube-integration/19024-5817/.minikube (perms=drwxr-xr-x)
	I0604 21:31:15.622084   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins/minikube-integration/19024-5817 (perms=drwxrwxr-x)
	I0604 21:31:15.622092   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0604 21:31:15.622101   14142 main.go:141] libmachine: (addons-450158) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0604 21:31:15.622106   14142 main.go:141] libmachine: (addons-450158) Creating domain...
	I0604 21:31:15.622141   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:31:15.622170   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19024-5817
	I0604 21:31:15.622184   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0604 21:31:15.622195   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home/jenkins
	I0604 21:31:15.622206   14142 main.go:141] libmachine: (addons-450158) DBG | Checking permissions on dir: /home
	I0604 21:31:15.622216   14142 main.go:141] libmachine: (addons-450158) DBG | Skipping /home - not owner
	I0604 21:31:15.623605   14142 main.go:141] libmachine: (addons-450158) define libvirt domain using xml: 
	I0604 21:31:15.623627   14142 main.go:141] libmachine: (addons-450158) <domain type='kvm'>
	I0604 21:31:15.623636   14142 main.go:141] libmachine: (addons-450158)   <name>addons-450158</name>
	I0604 21:31:15.623645   14142 main.go:141] libmachine: (addons-450158)   <memory unit='MiB'>4000</memory>
	I0604 21:31:15.623654   14142 main.go:141] libmachine: (addons-450158)   <vcpu>2</vcpu>
	I0604 21:31:15.623661   14142 main.go:141] libmachine: (addons-450158)   <features>
	I0604 21:31:15.623668   14142 main.go:141] libmachine: (addons-450158)     <acpi/>
	I0604 21:31:15.623674   14142 main.go:141] libmachine: (addons-450158)     <apic/>
	I0604 21:31:15.623686   14142 main.go:141] libmachine: (addons-450158)     <pae/>
	I0604 21:31:15.623692   14142 main.go:141] libmachine: (addons-450158)     
	I0604 21:31:15.623699   14142 main.go:141] libmachine: (addons-450158)   </features>
	I0604 21:31:15.623705   14142 main.go:141] libmachine: (addons-450158)   <cpu mode='host-passthrough'>
	I0604 21:31:15.623709   14142 main.go:141] libmachine: (addons-450158)   
	I0604 21:31:15.623714   14142 main.go:141] libmachine: (addons-450158)   </cpu>
	I0604 21:31:15.623723   14142 main.go:141] libmachine: (addons-450158)   <os>
	I0604 21:31:15.623727   14142 main.go:141] libmachine: (addons-450158)     <type>hvm</type>
	I0604 21:31:15.623732   14142 main.go:141] libmachine: (addons-450158)     <boot dev='cdrom'/>
	I0604 21:31:15.623737   14142 main.go:141] libmachine: (addons-450158)     <boot dev='hd'/>
	I0604 21:31:15.623743   14142 main.go:141] libmachine: (addons-450158)     <bootmenu enable='no'/>
	I0604 21:31:15.623756   14142 main.go:141] libmachine: (addons-450158)   </os>
	I0604 21:31:15.623768   14142 main.go:141] libmachine: (addons-450158)   <devices>
	I0604 21:31:15.623784   14142 main.go:141] libmachine: (addons-450158)     <disk type='file' device='cdrom'>
	I0604 21:31:15.623802   14142 main.go:141] libmachine: (addons-450158)       <source file='/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/boot2docker.iso'/>
	I0604 21:31:15.623809   14142 main.go:141] libmachine: (addons-450158)       <target dev='hdc' bus='scsi'/>
	I0604 21:31:15.623818   14142 main.go:141] libmachine: (addons-450158)       <readonly/>
	I0604 21:31:15.623824   14142 main.go:141] libmachine: (addons-450158)     </disk>
	I0604 21:31:15.623831   14142 main.go:141] libmachine: (addons-450158)     <disk type='file' device='disk'>
	I0604 21:31:15.623837   14142 main.go:141] libmachine: (addons-450158)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0604 21:31:15.623844   14142 main.go:141] libmachine: (addons-450158)       <source file='/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/addons-450158.rawdisk'/>
	I0604 21:31:15.623849   14142 main.go:141] libmachine: (addons-450158)       <target dev='hda' bus='virtio'/>
	I0604 21:31:15.623867   14142 main.go:141] libmachine: (addons-450158)     </disk>
	I0604 21:31:15.623889   14142 main.go:141] libmachine: (addons-450158)     <interface type='network'>
	I0604 21:31:15.623900   14142 main.go:141] libmachine: (addons-450158)       <source network='mk-addons-450158'/>
	I0604 21:31:15.623908   14142 main.go:141] libmachine: (addons-450158)       <model type='virtio'/>
	I0604 21:31:15.623916   14142 main.go:141] libmachine: (addons-450158)     </interface>
	I0604 21:31:15.623923   14142 main.go:141] libmachine: (addons-450158)     <interface type='network'>
	I0604 21:31:15.623933   14142 main.go:141] libmachine: (addons-450158)       <source network='default'/>
	I0604 21:31:15.623942   14142 main.go:141] libmachine: (addons-450158)       <model type='virtio'/>
	I0604 21:31:15.623952   14142 main.go:141] libmachine: (addons-450158)     </interface>
	I0604 21:31:15.623959   14142 main.go:141] libmachine: (addons-450158)     <serial type='pty'>
	I0604 21:31:15.623970   14142 main.go:141] libmachine: (addons-450158)       <target port='0'/>
	I0604 21:31:15.623979   14142 main.go:141] libmachine: (addons-450158)     </serial>
	I0604 21:31:15.623994   14142 main.go:141] libmachine: (addons-450158)     <console type='pty'>
	I0604 21:31:15.624016   14142 main.go:141] libmachine: (addons-450158)       <target type='serial' port='0'/>
	I0604 21:31:15.624029   14142 main.go:141] libmachine: (addons-450158)     </console>
	I0604 21:31:15.624037   14142 main.go:141] libmachine: (addons-450158)     <rng model='virtio'>
	I0604 21:31:15.624051   14142 main.go:141] libmachine: (addons-450158)       <backend model='random'>/dev/random</backend>
	I0604 21:31:15.624060   14142 main.go:141] libmachine: (addons-450158)     </rng>
	I0604 21:31:15.624071   14142 main.go:141] libmachine: (addons-450158)     
	I0604 21:31:15.624080   14142 main.go:141] libmachine: (addons-450158)     
	I0604 21:31:15.624088   14142 main.go:141] libmachine: (addons-450158)   </devices>
	I0604 21:31:15.624095   14142 main.go:141] libmachine: (addons-450158) </domain>
	I0604 21:31:15.624107   14142 main.go:141] libmachine: (addons-450158) 
	I0604 21:31:15.630469   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:af:4f:9b in network default
	I0604 21:31:15.631023   14142 main.go:141] libmachine: (addons-450158) Ensuring networks are active...
	I0604 21:31:15.631038   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:15.631793   14142 main.go:141] libmachine: (addons-450158) Ensuring network default is active
	I0604 21:31:15.632114   14142 main.go:141] libmachine: (addons-450158) Ensuring network mk-addons-450158 is active
	I0604 21:31:15.632634   14142 main.go:141] libmachine: (addons-450158) Getting domain xml...
	I0604 21:31:15.633377   14142 main.go:141] libmachine: (addons-450158) Creating domain...
	I0604 21:31:17.011527   14142 main.go:141] libmachine: (addons-450158) Waiting to get IP...
	I0604 21:31:17.012277   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:17.012651   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:17.012673   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:17.012638   14163 retry.go:31] will retry after 235.359217ms: waiting for machine to come up
	I0604 21:31:17.250132   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:17.250501   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:17.250528   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:17.250466   14163 retry.go:31] will retry after 272.483841ms: waiting for machine to come up
	I0604 21:31:17.524781   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:17.525096   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:17.525125   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:17.525055   14163 retry.go:31] will retry after 348.421491ms: waiting for machine to come up
	I0604 21:31:17.875453   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:17.875855   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:17.875885   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:17.875810   14163 retry.go:31] will retry after 597.712876ms: waiting for machine to come up
	I0604 21:31:18.475351   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:18.475742   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:18.475779   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:18.475694   14163 retry.go:31] will retry after 662.344215ms: waiting for machine to come up
	I0604 21:31:19.139334   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:19.139790   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:19.139815   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:19.139733   14163 retry.go:31] will retry after 926.776271ms: waiting for machine to come up
	I0604 21:31:20.068275   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:20.068686   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:20.068716   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:20.068637   14163 retry.go:31] will retry after 804.20306ms: waiting for machine to come up
	I0604 21:31:20.874637   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:20.875061   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:20.875097   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:20.875012   14163 retry.go:31] will retry after 908.904133ms: waiting for machine to come up
	I0604 21:31:21.784854   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:21.785247   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:21.785276   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:21.785198   14163 retry.go:31] will retry after 1.269401318s: waiting for machine to come up
	I0604 21:31:23.056535   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:23.056966   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:23.057002   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:23.056859   14163 retry.go:31] will retry after 1.658379187s: waiting for machine to come up
	I0604 21:31:24.717588   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:24.718003   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:24.718024   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:24.717959   14163 retry.go:31] will retry after 2.445214176s: waiting for machine to come up
	I0604 21:31:27.165760   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:27.166118   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:27.166170   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:27.166089   14163 retry.go:31] will retry after 2.4874617s: waiting for machine to come up
	I0604 21:31:29.656940   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:29.657341   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:29.657453   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:29.657334   14163 retry.go:31] will retry after 3.187625875s: waiting for machine to come up
	I0604 21:31:32.848129   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:32.848440   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find current IP address of domain addons-450158 in network mk-addons-450158
	I0604 21:31:32.848465   14142 main.go:141] libmachine: (addons-450158) DBG | I0604 21:31:32.848400   14163 retry.go:31] will retry after 3.844602897s: waiting for machine to come up
	I0604 21:31:36.694243   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:36.694601   14142 main.go:141] libmachine: (addons-450158) Found IP for machine: 192.168.39.110
	I0604 21:31:36.694638   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has current primary IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:36.694649   14142 main.go:141] libmachine: (addons-450158) Reserving static IP address...
	I0604 21:31:36.694852   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find host DHCP lease matching {name: "addons-450158", mac: "52:54:00:c2:57:1f", ip: "192.168.39.110"} in network mk-addons-450158
	I0604 21:31:36.761724   14142 main.go:141] libmachine: (addons-450158) Reserved static IP address: 192.168.39.110
	I0604 21:31:36.761754   14142 main.go:141] libmachine: (addons-450158) Waiting for SSH to be available...
	I0604 21:31:36.761764   14142 main.go:141] libmachine: (addons-450158) DBG | Getting to WaitForSSH function...
	I0604 21:31:36.764190   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:36.764480   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158
	I0604 21:31:36.764508   14142 main.go:141] libmachine: (addons-450158) DBG | unable to find defined IP address of network mk-addons-450158 interface with MAC address 52:54:00:c2:57:1f
	I0604 21:31:36.764637   14142 main.go:141] libmachine: (addons-450158) DBG | Using SSH client type: external
	I0604 21:31:36.764662   14142 main.go:141] libmachine: (addons-450158) DBG | Using SSH private key: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa (-rw-------)
	I0604 21:31:36.764700   14142 main.go:141] libmachine: (addons-450158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0604 21:31:36.764716   14142 main.go:141] libmachine: (addons-450158) DBG | About to run SSH command:
	I0604 21:31:36.764763   14142 main.go:141] libmachine: (addons-450158) DBG | exit 0
	I0604 21:31:36.776390   14142 main.go:141] libmachine: (addons-450158) DBG | SSH cmd err, output: exit status 255: 
	I0604 21:31:36.776412   14142 main.go:141] libmachine: (addons-450158) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0604 21:31:36.776420   14142 main.go:141] libmachine: (addons-450158) DBG | command : exit 0
	I0604 21:31:36.776429   14142 main.go:141] libmachine: (addons-450158) DBG | err     : exit status 255
	I0604 21:31:36.776459   14142 main.go:141] libmachine: (addons-450158) DBG | output  : 
	I0604 21:31:39.778546   14142 main.go:141] libmachine: (addons-450158) DBG | Getting to WaitForSSH function...
	I0604 21:31:39.780640   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:39.780981   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:39.781009   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:39.781088   14142 main.go:141] libmachine: (addons-450158) DBG | Using SSH client type: external
	I0604 21:31:39.781113   14142 main.go:141] libmachine: (addons-450158) DBG | Using SSH private key: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa (-rw-------)
	I0604 21:31:39.781157   14142 main.go:141] libmachine: (addons-450158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0604 21:31:39.781170   14142 main.go:141] libmachine: (addons-450158) DBG | About to run SSH command:
	I0604 21:31:39.781178   14142 main.go:141] libmachine: (addons-450158) DBG | exit 0
	I0604 21:31:39.908440   14142 main.go:141] libmachine: (addons-450158) DBG | SSH cmd err, output: <nil>: 
	I0604 21:31:39.908730   14142 main.go:141] libmachine: (addons-450158) KVM machine creation complete!
	I0604 21:31:39.908939   14142 main.go:141] libmachine: (addons-450158) Calling .GetConfigRaw
	I0604 21:31:39.909507   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:39.909660   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:39.909809   14142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0604 21:31:39.909826   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:31:39.910890   14142 main.go:141] libmachine: Detecting operating system of created instance...
	I0604 21:31:39.910903   14142 main.go:141] libmachine: Waiting for SSH to be available...
	I0604 21:31:39.910911   14142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0604 21:31:39.910918   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:39.913114   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:39.913475   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:39.913499   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:39.913661   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:39.913840   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:39.913996   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:39.914182   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:39.914354   14142 main.go:141] libmachine: Using SSH client type: native
	I0604 21:31:39.914572   14142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0604 21:31:39.914585   14142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0604 21:31:40.019681   14142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 21:31:40.019702   14142 main.go:141] libmachine: Detecting the provisioner...
	I0604 21:31:40.019709   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.022171   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.022492   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.022516   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.022678   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.022858   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.023001   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.023128   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.023280   14142 main.go:141] libmachine: Using SSH client type: native
	I0604 21:31:40.023428   14142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0604 21:31:40.023438   14142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0604 21:31:40.129237   14142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0604 21:31:40.129285   14142 main.go:141] libmachine: found compatible host: buildroot
	I0604 21:31:40.129292   14142 main.go:141] libmachine: Provisioning with buildroot...
	I0604 21:31:40.129304   14142 main.go:141] libmachine: (addons-450158) Calling .GetMachineName
	I0604 21:31:40.129526   14142 buildroot.go:166] provisioning hostname "addons-450158"
	I0604 21:31:40.129551   14142 main.go:141] libmachine: (addons-450158) Calling .GetMachineName
	I0604 21:31:40.129697   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.131700   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.132022   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.132049   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.132202   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.132382   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.132555   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.132686   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.132826   14142 main.go:141] libmachine: Using SSH client type: native
	I0604 21:31:40.132978   14142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0604 21:31:40.132990   14142 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-450158 && echo "addons-450158" | sudo tee /etc/hostname
	I0604 21:31:40.253853   14142 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-450158
	
	I0604 21:31:40.253894   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.256763   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.257080   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.257111   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.257259   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.257408   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.257543   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.257672   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.257803   14142 main.go:141] libmachine: Using SSH client type: native
	I0604 21:31:40.257946   14142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0604 21:31:40.257961   14142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-450158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-450158/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-450158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 21:31:40.372627   14142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 21:31:40.372651   14142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19024-5817/.minikube CaCertPath:/home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19024-5817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19024-5817/.minikube}
	I0604 21:31:40.372688   14142 buildroot.go:174] setting up certificates
	I0604 21:31:40.372706   14142 provision.go:84] configureAuth start
	I0604 21:31:40.372721   14142 main.go:141] libmachine: (addons-450158) Calling .GetMachineName
	I0604 21:31:40.373009   14142 main.go:141] libmachine: (addons-450158) Calling .GetIP
	I0604 21:31:40.375347   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.375666   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.375697   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.375843   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.377795   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.378119   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.378135   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.378313   14142 provision.go:143] copyHostCerts
	I0604 21:31:40.378395   14142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19024-5817/.minikube/cert.pem (1123 bytes)
	I0604 21:31:40.378524   14142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19024-5817/.minikube/key.pem (1675 bytes)
	I0604 21:31:40.378584   14142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19024-5817/.minikube/ca.pem (1082 bytes)
	I0604 21:31:40.378649   14142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19024-5817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca-key.pem org=jenkins.addons-450158 san=[127.0.0.1 192.168.39.110 addons-450158 localhost minikube]
	I0604 21:31:40.503722   14142 provision.go:177] copyRemoteCerts
	I0604 21:31:40.503771   14142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 21:31:40.503789   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.506181   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.506440   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.506464   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.506648   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.506871   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.507037   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.507233   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:31:40.590087   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0604 21:31:40.613400   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 21:31:40.636448   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 21:31:40.659363   14142 provision.go:87] duration metric: took 286.645223ms to configureAuth
	I0604 21:31:40.659385   14142 buildroot.go:189] setting minikube options for container-runtime
	I0604 21:31:40.659553   14142 config.go:182] Loaded profile config "addons-450158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:31:40.659576   14142 main.go:141] libmachine: Checking connection to Docker...
	I0604 21:31:40.659591   14142 main.go:141] libmachine: (addons-450158) Calling .GetURL
	I0604 21:31:40.660703   14142 main.go:141] libmachine: (addons-450158) DBG | Using libvirt version 6000000
	I0604 21:31:40.662728   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.663056   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.663079   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.663221   14142 main.go:141] libmachine: Docker is up and running!
	I0604 21:31:40.663236   14142 main.go:141] libmachine: Reticulating splines...
	I0604 21:31:40.663243   14142 client.go:171] duration metric: took 25.799762774s to LocalClient.Create
	I0604 21:31:40.663265   14142 start.go:167] duration metric: took 25.79981536s to libmachine.API.Create "addons-450158"
	I0604 21:31:40.663278   14142 start.go:293] postStartSetup for "addons-450158" (driver="kvm2")
	I0604 21:31:40.663291   14142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 21:31:40.663314   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:40.663526   14142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 21:31:40.663546   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.665484   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.665740   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.665766   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.665912   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.666067   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.666215   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.666375   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:31:40.751126   14142 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 21:31:40.755351   14142 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 21:31:40.755378   14142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19024-5817/.minikube/addons for local assets ...
	I0604 21:31:40.755468   14142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19024-5817/.minikube/files for local assets ...
	I0604 21:31:40.755498   14142 start.go:296] duration metric: took 92.214065ms for postStartSetup
	I0604 21:31:40.755531   14142 main.go:141] libmachine: (addons-450158) Calling .GetConfigRaw
	I0604 21:31:40.756052   14142 main.go:141] libmachine: (addons-450158) Calling .GetIP
	I0604 21:31:40.758687   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.759083   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.759112   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.759290   14142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/config.json ...
	I0604 21:31:40.759493   14142 start.go:128] duration metric: took 25.912535395s to createHost
	I0604 21:31:40.759519   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.761640   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.761954   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.761979   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.762115   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.762292   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.762442   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.762577   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.762731   14142 main.go:141] libmachine: Using SSH client type: native
	I0604 21:31:40.762876   14142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0604 21:31:40.762886   14142 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 21:31:40.869021   14142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717536700.838558749
	
	I0604 21:31:40.869041   14142 fix.go:216] guest clock: 1717536700.838558749
	I0604 21:31:40.869047   14142 fix.go:229] Guest: 2024-06-04 21:31:40.838558749 +0000 UTC Remote: 2024-06-04 21:31:40.759507112 +0000 UTC m=+26.007419351 (delta=79.051637ms)
	I0604 21:31:40.869077   14142 fix.go:200] guest clock delta is within tolerance: 79.051637ms
	I0604 21:31:40.869082   14142 start.go:83] releasing machines lock for "addons-450158", held for 26.022187775s
	I0604 21:31:40.869103   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:40.869347   14142 main.go:141] libmachine: (addons-450158) Calling .GetIP
	I0604 21:31:40.871690   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.872154   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.872174   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.872400   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:40.872862   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:40.873039   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:31:40.873132   14142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 21:31:40.873187   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.873192   14142 ssh_runner.go:195] Run: cat /version.json
	I0604 21:31:40.873205   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:31:40.875534   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.875705   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.875869   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.875897   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.875992   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.876129   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:40.876141   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.876149   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:40.876325   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:31:40.876327   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.876499   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:31:40.876528   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:31:40.876641   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:31:40.876746   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:31:40.953102   14142 ssh_runner.go:195] Run: systemctl --version
	I0604 21:31:41.051203   14142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0604 21:31:41.057056   14142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 21:31:41.057127   14142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 21:31:41.072365   14142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 21:31:41.072385   14142 start.go:494] detecting cgroup driver to use...
	I0604 21:31:41.072442   14142 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 21:31:41.103531   14142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 21:31:41.116587   14142 docker.go:217] disabling cri-docker service (if available) ...
	I0604 21:31:41.116641   14142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0604 21:31:41.130299   14142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0604 21:31:41.143394   14142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0604 21:31:41.258094   14142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0604 21:31:41.406224   14142 docker.go:233] disabling docker service ...
	I0604 21:31:41.406293   14142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0604 21:31:41.420397   14142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0604 21:31:41.432911   14142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0604 21:31:41.567180   14142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0604 21:31:41.686487   14142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0604 21:31:41.700343   14142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 21:31:41.718250   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 21:31:41.728258   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 21:31:41.737912   14142 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 21:31:41.737972   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 21:31:41.747632   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:31:41.757317   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 21:31:41.766928   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:31:41.776678   14142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 21:31:41.786993   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 21:31:41.796954   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 21:31:41.807463   14142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 21:31:41.817753   14142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 21:31:41.826889   14142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0604 21:31:41.826949   14142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0604 21:31:41.840581   14142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 21:31:41.849886   14142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:31:41.964162   14142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0604 21:31:41.992003   14142 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0604 21:31:41.992093   14142 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0604 21:31:41.996251   14142 retry.go:31] will retry after 1.230722463s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0604 21:31:43.227652   14142 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0604 21:31:43.232733   14142 start.go:562] Will wait 60s for crictl version
	I0604 21:31:43.232808   14142 ssh_runner.go:195] Run: which crictl
	I0604 21:31:43.236530   14142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 21:31:43.271806   14142 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.17
	RuntimeApiVersion:  v1
	I0604 21:31:43.271890   14142 ssh_runner.go:195] Run: containerd --version
	I0604 21:31:43.301728   14142 ssh_runner.go:195] Run: containerd --version
	I0604 21:31:43.328756   14142 out.go:177] * Preparing Kubernetes v1.30.1 on containerd 1.7.17 ...
	I0604 21:31:43.329996   14142 main.go:141] libmachine: (addons-450158) Calling .GetIP
	I0604 21:31:43.332645   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:43.332945   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:31:43.332963   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:31:43.333183   14142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0604 21:31:43.337217   14142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 21:31:43.349257   14142 kubeadm.go:877] updating cluster {Name:addons-450158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-450158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 21:31:43.349363   14142 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0604 21:31:43.349408   14142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0604 21:31:43.381007   14142 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0604 21:31:43.381061   14142 ssh_runner.go:195] Run: which lz4
	I0604 21:31:43.384878   14142 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0604 21:31:43.388966   14142 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 21:31:43.388987   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (393919565 bytes)
	I0604 21:31:44.661452   14142 containerd.go:563] duration metric: took 1.276596215s to copy over tarball
	I0604 21:31:44.661519   14142 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 21:31:46.854691   14142 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.193137993s)
	I0604 21:31:46.854727   14142 containerd.go:570] duration metric: took 2.193250727s to extract the tarball
	I0604 21:31:46.854751   14142 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0604 21:31:46.892150   14142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:31:47.003597   14142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0604 21:31:47.035427   14142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0604 21:31:47.074331   14142 retry.go:31] will retry after 195.856546ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-04T21:31:47Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0604 21:31:47.270799   14142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0604 21:31:47.305084   14142 containerd.go:627] all images are preloaded for containerd runtime.
	I0604 21:31:47.305103   14142 cache_images.go:84] Images are preloaded, skipping loading
	I0604 21:31:47.305110   14142 kubeadm.go:928] updating node { 192.168.39.110 8443 v1.30.1 containerd true true} ...
	I0604 21:31:47.305219   14142 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-450158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-450158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 21:31:47.305274   14142 ssh_runner.go:195] Run: sudo crictl info
	I0604 21:31:47.337921   14142 cni.go:84] Creating CNI manager for ""
	I0604 21:31:47.337948   14142 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0604 21:31:47.337960   14142 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 21:31:47.337988   14142 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-450158 NodeName:addons-450158 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 21:31:47.338120   14142 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-450158"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 21:31:47.338176   14142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 21:31:47.348503   14142 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 21:31:47.348593   14142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0604 21:31:47.358314   14142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0604 21:31:47.374807   14142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 21:31:47.391365   14142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0604 21:31:47.407951   14142 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0604 21:31:47.412037   14142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 21:31:47.424581   14142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:31:47.534356   14142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 21:31:47.556149   14142 certs.go:68] Setting up /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158 for IP: 192.168.39.110
	I0604 21:31:47.556174   14142 certs.go:194] generating shared ca certs ...
	I0604 21:31:47.556194   14142 certs.go:226] acquiring lock for ca certs: {Name:mkcd2e2f76bea0640119fc321b5a17ed341cc788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:47.556354   14142 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19024-5817/.minikube/ca.key
	I0604 21:31:47.696106   14142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19024-5817/.minikube/ca.crt ...
	I0604 21:31:47.696135   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/ca.crt: {Name:mkd405278c8a211ca5647bef5a46008594efbfff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:47.696316   14142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19024-5817/.minikube/ca.key ...
	I0604 21:31:47.696330   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/ca.key: {Name:mk038aaf7b743e852981c9d967190c7adb49fa4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:47.696432   14142 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.key
	I0604 21:31:47.867964   14142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.crt ...
	I0604 21:31:47.867993   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.crt: {Name:mk945193f228ccd6cebc34d97d7897b5295d6279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:47.868170   14142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.key ...
	I0604 21:31:47.868185   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.key: {Name:mk452dea0dff9eb7bcab2221d17cad5a12fac179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:47.868279   14142 certs.go:256] generating profile certs ...
	I0604 21:31:47.868347   14142 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.key
	I0604 21:31:47.868365   14142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt with IP's: []
	I0604 21:31:48.017750   14142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt ...
	I0604 21:31:48.017780   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: {Name:mk31fb4add8b1a432cdc4364004aba1460f83dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.017950   14142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.key ...
	I0604 21:31:48.017965   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.key: {Name:mkbb67db70e2837246ad31a258fe7bfc721887f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.018059   14142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key.8d9e2d79
	I0604 21:31:48.018082   14142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt.8d9e2d79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110]
	I0604 21:31:48.100976   14142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt.8d9e2d79 ...
	I0604 21:31:48.101004   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt.8d9e2d79: {Name:mk4d5799aec1fbeecc67470785c5e62fa6367d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.101174   14142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key.8d9e2d79 ...
	I0604 21:31:48.101192   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key.8d9e2d79: {Name:mk40457484032548c744d0df31921ad9f15ea955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.101292   14142 certs.go:381] copying /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt.8d9e2d79 -> /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt
	I0604 21:31:48.101397   14142 certs.go:385] copying /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key.8d9e2d79 -> /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key
	I0604 21:31:48.101476   14142 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.key
	I0604 21:31:48.101500   14142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.crt with IP's: []
	I0604 21:31:48.148756   14142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.crt ...
	I0604 21:31:48.148781   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.crt: {Name:mk511f5a78f8c391d078ce672a6c17df9b6ce520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.148916   14142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.key ...
	I0604 21:31:48.148926   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.key: {Name:mke009e599bd58c772006d21c2ad3164d7160de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:48.149069   14142 certs.go:484] found cert: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca-key.pem (1679 bytes)
	I0604 21:31:48.149099   14142 certs.go:484] found cert: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/ca.pem (1082 bytes)
	I0604 21:31:48.149120   14142 certs.go:484] found cert: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/cert.pem (1123 bytes)
	I0604 21:31:48.149150   14142 certs.go:484] found cert: /home/jenkins/minikube-integration/19024-5817/.minikube/certs/key.pem (1675 bytes)
	I0604 21:31:48.150193   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 21:31:48.176005   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0604 21:31:48.199701   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 21:31:48.225416   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 21:31:48.255305   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0604 21:31:48.278755   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0604 21:31:48.305924   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 21:31:48.329886   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0604 21:31:48.352990   14142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19024-5817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 21:31:48.375866   14142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 21:31:48.392188   14142 ssh_runner.go:195] Run: openssl version
	I0604 21:31:48.398120   14142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 21:31:48.409040   14142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:31:48.413518   14142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:31:48.413562   14142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:31:48.419271   14142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
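The two commands above are minikube installing its CA into the system trust store: OpenSSL resolves trust anchors by `<subject-hash>.0` filenames in the certs directory, which is why the log computes `openssl x509 -hash` and then symlinks `b5213941.0`. A minimal sketch of the same mechanism, demonstrated against a throwaway self-signed CA (directory and names here are illustrative, not minikube's):

```shell
# OpenSSL looks up CAs in a -CApath directory by <subject-hash>.0 symlinks.
certdir=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" -days 1 2>/dev/null
# Compute the subject hash and create the lookup symlink, as the log does
# for /etc/ssl/certs/b5213941.0.
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
```

With the symlink in place, `openssl verify -CApath "$certdir"` can find the CA by hash, which is exactly what system TLS clients rely on when minikube links its CA under `/etc/ssl/certs`.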
	I0604 21:31:48.430378   14142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 21:31:48.434479   14142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 21:31:48.434531   14142 kubeadm.go:391] StartCluster: {Name:addons-450158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-450158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:31:48.434623   14142 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0604 21:31:48.434668   14142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0604 21:31:48.477374   14142 cri.go:89] found id: ""
	I0604 21:31:48.477438   14142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 21:31:48.488030   14142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 21:31:48.497880   14142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 21:31:48.508261   14142 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 21:31:48.508274   14142 kubeadm.go:156] found existing configuration files:
	
	I0604 21:31:48.508306   14142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 21:31:48.517599   14142 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 21:31:48.517641   14142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 21:31:48.527159   14142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 21:31:48.536506   14142 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 21:31:48.536568   14142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 21:31:48.546383   14142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 21:31:48.555544   14142 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 21:31:48.555594   14142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 21:31:48.565437   14142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 21:31:48.574772   14142 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 21:31:48.574809   14142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 21:31:48.584474   14142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 21:31:48.642025   14142 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 21:31:48.642122   14142 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 21:31:48.781385   14142 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 21:31:48.781501   14142 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 21:31:48.781621   14142 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 21:31:48.983989   14142 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 21:31:48.987075   14142 out.go:204]   - Generating certificates and keys ...
	I0604 21:31:48.987178   14142 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 21:31:48.987263   14142 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 21:31:49.132237   14142 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 21:31:49.248214   14142 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 21:31:49.477335   14142 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 21:31:49.817863   14142 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 21:31:50.004708   14142 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 21:31:50.004834   14142 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-450158 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0604 21:31:50.166494   14142 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 21:31:50.166622   14142 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-450158 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0604 21:31:50.395994   14142 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 21:31:50.685820   14142 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 21:31:50.871590   14142 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 21:31:50.871672   14142 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 21:31:50.940155   14142 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 21:31:51.069299   14142 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 21:31:51.127483   14142 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 21:31:51.318461   14142 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 21:31:51.472911   14142 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 21:31:51.473500   14142 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 21:31:51.475893   14142 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 21:31:51.477900   14142 out.go:204]   - Booting up control plane ...
	I0604 21:31:51.478005   14142 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 21:31:51.478093   14142 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 21:31:51.478181   14142 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 21:31:51.494012   14142 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 21:31:51.494498   14142 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 21:31:51.494576   14142 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 21:31:51.620620   14142 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 21:31:51.620752   14142 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 21:31:52.121937   14142 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.903445ms
	I0604 21:31:52.122049   14142 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 21:31:57.121282   14142 kubeadm.go:309] [api-check] The API server is healthy after 5.001686816s
	I0604 21:31:57.137986   14142 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 21:31:57.164097   14142 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 21:31:57.197292   14142 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 21:31:57.197514   14142 kubeadm.go:309] [mark-control-plane] Marking the node addons-450158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 21:31:57.213496   14142 kubeadm.go:309] [bootstrap-token] Using token: s0hc7u.qc73qsa93xwg29kc
	I0604 21:31:57.214918   14142 out.go:204]   - Configuring RBAC rules ...
	I0604 21:31:57.215059   14142 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 21:31:57.220898   14142 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 21:31:57.235532   14142 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 21:31:57.243769   14142 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 21:31:57.247317   14142 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 21:31:57.252166   14142 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 21:31:57.528146   14142 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 21:31:57.962830   14142 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 21:31:58.526656   14142 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 21:31:58.527827   14142 kubeadm.go:309] 
	I0604 21:31:58.527907   14142 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 21:31:58.527916   14142 kubeadm.go:309] 
	I0604 21:31:58.527997   14142 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 21:31:58.528006   14142 kubeadm.go:309] 
	I0604 21:31:58.528050   14142 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 21:31:58.528169   14142 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 21:31:58.528253   14142 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 21:31:58.528263   14142 kubeadm.go:309] 
	I0604 21:31:58.528336   14142 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 21:31:58.528345   14142 kubeadm.go:309] 
	I0604 21:31:58.528416   14142 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 21:31:58.528425   14142 kubeadm.go:309] 
	I0604 21:31:58.528535   14142 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 21:31:58.528652   14142 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 21:31:58.528771   14142 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 21:31:58.528781   14142 kubeadm.go:309] 
	I0604 21:31:58.528891   14142 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 21:31:58.528995   14142 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 21:31:58.529008   14142 kubeadm.go:309] 
	I0604 21:31:58.529123   14142 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token s0hc7u.qc73qsa93xwg29kc \
	I0604 21:31:58.529308   14142 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:72538ed5b3af52f4e25b942c42dcf43be18cb45e74e01a16adf7009af68659f4 \
	I0604 21:31:58.529345   14142 kubeadm.go:309] 	--control-plane 
	I0604 21:31:58.529354   14142 kubeadm.go:309] 
	I0604 21:31:58.529473   14142 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 21:31:58.529482   14142 kubeadm.go:309] 
	I0604 21:31:58.529577   14142 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token s0hc7u.qc73qsa93xwg29kc \
	I0604 21:31:58.529709   14142 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:72538ed5b3af52f4e25b942c42dcf43be18cb45e74e01a16adf7009af68659f4 
	I0604 21:31:58.529878   14142 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
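The `--discovery-token-ca-cert-hash` value printed by `kubeadm init` above is, per the kubeadm documentation, the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed independently to verify a join command; a sketch (the `/etc/kubernetes/pki/ca.crt` path only exists on the control-plane node):

```shell
# Recompute kubeadm's --discovery-token-ca-cert-hash for a given CA cert:
# SHA-256 over the DER-encoded public key, prefixed with "sha256:".
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 | awk '{print "sha256:" $NF}'
}
# Usage on a control plane: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The result should match the `sha256:72538ed5…` value embedded in the join commands logged above.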
	I0604 21:31:58.529991   14142 cni.go:84] Creating CNI manager for ""
	I0604 21:31:58.530007   14142 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0604 21:31:58.531816   14142 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0604 21:31:58.533162   14142 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0604 21:31:58.544574   14142 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0604 21:31:58.563068   14142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 21:31:58.563151   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:31:58.563174   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-450158 minikube.k8s.io/updated_at=2024_06_04T21_31_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=addons-450158 minikube.k8s.io/primary=true
	I0604 21:31:58.687681   14142 ops.go:34] apiserver oom_adj: -16
	I0604 21:31:58.690357   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:31:59.191358   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:31:59.691282   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:00.191179   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:00.690730   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:01.190438   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:01.690555   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:02.191073   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:02.691351   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:03.190947   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:03.690909   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:04.191105   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:04.690657   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:05.190992   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:05.691410   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:06.190573   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:06.690747   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:07.190674   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:07.690663   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:08.191048   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:08.691070   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:09.191099   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:09.691271   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:10.191429   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:10.690971   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:11.190999   14142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:32:11.274371   14142 kubeadm.go:1107] duration metric: took 12.711276799s to wait for elevateKubeSystemPrivileges
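The burst of identical `kubectl get sa default` runs above is a half-second readiness poll: minikube waits for kubeadm's controllers to create the `default` ServiceAccount before binding kube-system privileges. A minimal sketch of such a poll (the timeout and kubeconfig path in the usage line are illustrative, not minikube's exact values):

```shell
# Retry a command every 500ms until it succeeds or the deadline passes,
# mirroring the get-sa polling loop in the log above.
retry_until() {
  local deadline=$(( $(date +%s) + $1 )); shift
  until "$@" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}
# Usage (hypothetical): retry_until 120 sudo kubectl \
#   --kubeconfig=/var/lib/minikube/kubeconfig get sa default
```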
	W0604 21:32:11.274406   14142 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 21:32:11.274413   14142 kubeadm.go:393] duration metric: took 22.839886879s to StartCluster
	I0604 21:32:11.274434   14142 settings.go:142] acquiring lock: {Name:mkbafbf8fac51289173454c7e179fd6ee7954db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:32:11.274549   14142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:32:11.274932   14142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/kubeconfig: {Name:mk96cdd816fd5a0c7bae35f929f47f35a30e6d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:32:11.275113   14142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 21:32:11.275138   14142 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0604 21:32:11.277303   14142 out.go:177] * Verifying Kubernetes components...
	I0604 21:32:11.275204   14142 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0604 21:32:11.275371   14142 config.go:182] Loaded profile config "addons-450158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:32:11.277411   14142 addons.go:69] Setting cloud-spanner=true in profile "addons-450158"
	I0604 21:32:11.278915   14142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:32:11.278932   14142 addons.go:234] Setting addon cloud-spanner=true in "addons-450158"
	I0604 21:32:11.278971   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277419   14142 addons.go:69] Setting gcp-auth=true in profile "addons-450158"
	I0604 21:32:11.279005   14142 mustload.go:65] Loading cluster: addons-450158
	I0604 21:32:11.277425   14142 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-450158"
	I0604 21:32:11.279098   14142 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-450158"
	I0604 21:32:11.279150   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.279198   14142 config.go:182] Loaded profile config "addons-450158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:32:11.277424   14142 addons.go:69] Setting yakd=true in profile "addons-450158"
	I0604 21:32:11.279262   14142 addons.go:234] Setting addon yakd=true in "addons-450158"
	I0604 21:32:11.279303   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277427   14142 addons.go:69] Setting inspektor-gadget=true in profile "addons-450158"
	I0604 21:32:11.279381   14142 addons.go:234] Setting addon inspektor-gadget=true in "addons-450158"
	I0604 21:32:11.279414   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.279456   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.279485   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.279503   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.279520   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.279522   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.279535   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277432   14142 addons.go:69] Setting default-storageclass=true in profile "addons-450158"
	I0604 21:32:11.279672   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.279694   14142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-450158"
	I0604 21:32:11.279708   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.279780   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.279800   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277436   14142 addons.go:69] Setting metrics-server=true in profile "addons-450158"
	I0604 21:32:11.279954   14142 addons.go:234] Setting addon metrics-server=true in "addons-450158"
	I0604 21:32:11.279990   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277439   14142 addons.go:69] Setting ingress=true in profile "addons-450158"
	I0604 21:32:11.280024   14142 addons.go:234] Setting addon ingress=true in "addons-450158"
	I0604 21:32:11.280046   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.280055   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.280075   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.280343   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.280364   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.280402   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.280422   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277445   14142 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-450158"
	I0604 21:32:11.280572   14142 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-450158"
	I0604 21:32:11.280603   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.280959   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.280994   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277435   14142 addons.go:69] Setting helm-tiller=true in profile "addons-450158"
	I0604 21:32:11.281238   14142 addons.go:234] Setting addon helm-tiller=true in "addons-450158"
	I0604 21:32:11.281264   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.281604   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.281622   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277453   14142 addons.go:69] Setting volcano=true in profile "addons-450158"
	I0604 21:32:11.283887   14142 addons.go:234] Setting addon volcano=true in "addons-450158"
	I0604 21:32:11.283926   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277458   14142 addons.go:69] Setting registry=true in profile "addons-450158"
	I0604 21:32:11.289619   14142 addons.go:234] Setting addon registry=true in "addons-450158"
	I0604 21:32:11.289658   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.290044   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.290076   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277460   14142 addons.go:69] Setting volumesnapshots=true in profile "addons-450158"
	I0604 21:32:11.291381   14142 addons.go:234] Setting addon volumesnapshots=true in "addons-450158"
	I0604 21:32:11.291415   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277452   14142 addons.go:69] Setting ingress-dns=true in profile "addons-450158"
	I0604 21:32:11.292099   14142 addons.go:234] Setting addon ingress-dns=true in "addons-450158"
	I0604 21:32:11.292159   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.277461   14142 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-450158"
	I0604 21:32:11.295718   14142 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-450158"
	I0604 21:32:11.296090   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.296129   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.277490   14142 addons.go:69] Setting storage-provisioner=true in profile "addons-450158"
	I0604 21:32:11.296218   14142 addons.go:234] Setting addon storage-provisioner=true in "addons-450158"
	I0604 21:32:11.296272   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.296668   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.298690   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.300615   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0604 21:32:11.300775   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I0604 21:32:11.301072   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.301355   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.301649   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.301666   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.302049   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.302124   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.302143   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.302751   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.302797   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.304609   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0604 21:32:11.304754   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.305055   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.305412   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.305433   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.305707   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.305777   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.305806   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.305865   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.307643   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.311271   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0604 21:32:11.312914   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.312954   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.313042   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.313043   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.313069   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.313069   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.313404   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.313428   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.324144   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0604 21:32:11.324150   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0604 21:32:11.324666   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.324753   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.325034   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.325386   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.325401   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.325531   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.325543   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.325776   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.325793   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.326203   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.326791   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.326824   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.326994   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.327101   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.327569   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.327590   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.327698   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.327727   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.328967   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0604 21:32:11.334698   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0604 21:32:11.335022   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.335509   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.335525   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.335962   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.336142   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.336735   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.337615   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.337639   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.337972   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.338222   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.338351   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I0604 21:32:11.338742   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.339272   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.339285   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.339606   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.340193   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.340240   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.340840   14142 addons.go:234] Setting addon default-storageclass=true in "addons-450158"
	I0604 21:32:11.340878   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.341235   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.341270   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.341838   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.344569   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0604 21:32:11.346055   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0604 21:32:11.347470   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0604 21:32:11.348831   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0604 21:32:11.350190   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0604 21:32:11.351388   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0604 21:32:11.352630   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0604 21:32:11.353843   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0604 21:32:11.354998   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0604 21:32:11.355014   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0604 21:32:11.355036   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.358499   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.359082   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.359105   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.359300   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.359460   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.359594   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.359736   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.362433   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0604 21:32:11.365643   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0604 21:32:11.366650   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.367230   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.367242   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.367321   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.367582   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.367939   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.367963   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.368202   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.368213   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.368843   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.369392   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.369425   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.369757   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0604 21:32:11.370204   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.370550   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0604 21:32:11.370786   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.370798   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.371135   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.371200   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.371787   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.371812   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.372179   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0604 21:32:11.372570   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.373326   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0604 21:32:11.373794   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.374283   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.374298   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.374607   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.375097   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.375128   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.375777   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36249
	I0604 21:32:11.376212   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.376723   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.376741   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.377163   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.377180   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.377241   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.377585   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.378076   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.378106   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.378613   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.378641   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.378850   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.378864   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.378927   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0604 21:32:11.379194   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.379332   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.379398   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.379934   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.379952   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.380303   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.380479   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.382149   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.382185   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.384219   14142 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0604 21:32:11.385656   14142 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0604 21:32:11.387075   14142 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0604 21:32:11.387095   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0604 21:32:11.387114   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.385630   14142 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0604 21:32:11.387180   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0604 21:32:11.387196   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.390771   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.391000   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.391265   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.391285   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.391325   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.391337   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.391521   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.391693   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.391872   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.392033   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.392321   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.392564   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.392688   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.392835   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.399147   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0604 21:32:11.399504   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.400015   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.400030   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.400444   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.401228   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.407865   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I0604 21:32:11.408345   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.408987   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.409004   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.409447   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.409660   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.410750   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41777
	I0604 21:32:11.411082   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.411570   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.411585   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.411643   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0604 21:32:11.412098   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.412156   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0604 21:32:11.412956   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.413150   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.414735   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I0604 21:32:11.414745   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0604 21:32:11.416465   14142 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0604 21:32:11.415131   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.415185   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.415307   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.415337   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.415920   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.416232   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.416766   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I0604 21:32:11.417589   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0604 21:32:11.417990   14142 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0604 21:32:11.418003   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0604 21:32:11.418021   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.419369   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.419388   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.419432   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.421398   14142 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0604 21:32:11.421412   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.422662   14142 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0604 21:32:11.422678   14142 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0604 21:32:11.422697   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.419446   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.422746   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.424222   14142 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0604 21:32:11.419522   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.419879   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.420820   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.421127   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.419446   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.421698   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.421841   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.421862   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0604 21:32:11.422230   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0604 21:32:11.423154   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.424586   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0604 21:32:11.425535   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.425587   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0604 21:32:11.425597   14142 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0604 21:32:11.425611   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.425645   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.425691   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.425714   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.427163   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.427245   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.427266   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.427265   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.427282   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.427283   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.427308   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.427362   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.427366   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.427402   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.427418   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.427663   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.427744   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.427842   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.427928   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.427977   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.428092   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.428103   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.428149   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.428169   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.428199   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.429127   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.429188   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.429505   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.429518   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.429677   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.430308   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.430328   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.430402   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.430686   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.430851   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.431338   14142 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-450158"
	I0604 21:32:11.431374   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:11.431742   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.431775   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.432230   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.432278   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.432283   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.432301   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:11.432318   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.432464   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.434070   14142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 21:32:11.432555   14142 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 21:32:11.432728   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.432964   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.433007   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.433282   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.433598   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.435453   14142 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0604 21:32:11.435474   14142 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:32:11.435486   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 21:32:11.435503   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.435475   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.435662   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.437042   14142 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0604 21:32:11.435925   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0604 21:32:11.435945   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.436586   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.438529   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.438555   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.440162   14142 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0604 21:32:11.438811   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.438819   14142 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0604 21:32:11.438900   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.439051   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.439241   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.440779   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.442539   14142 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0604 21:32:11.442555   14142 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0604 21:32:11.442572   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.441144   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.442627   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.442644   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.443914   14142 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0604 21:32:11.441792   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.441809   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.441821   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.441878   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.442389   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.445268   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.445495   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.445543   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.445680   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.445957   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.446031   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.446045   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.446072   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.446113   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.446241   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.446298   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.446374   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.446417   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.446494   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.446961   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.447103   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.447868   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.449555   14142 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0604 21:32:11.448599   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.450956   14142 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0604 21:32:11.450967   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0604 21:32:11.450991   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.452429   14142 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0604 21:32:11.453533   14142 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0604 21:32:11.453552   14142 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0604 21:32:11.453568   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.454337   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.454348   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0604 21:32:11.454685   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.455018   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.455055   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.455113   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.455128   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.455289   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.455431   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.455462   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.455558   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.455609   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.455765   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.457005   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.458719   14142 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:32:11.459936   14142 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0604 21:32:11.458406   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.458958   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.462616   14142 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:32:11.461450   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.461906   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.462647   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.464737   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.464974   14142 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0604 21:32:11.464988   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0604 21:32:11.465010   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.467542   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.468747   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.469007   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.469041   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.469158   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.469317   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.469467   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.469573   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.470438   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0604 21:32:11.470961   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.471935   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.471948   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.472609   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0604 21:32:11.472649   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.473636   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.473737   14142 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0604 21:32:11.473750   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0604 21:32:11.473765   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.475304   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.477053   14142 out.go:177]   - Using image docker.io/registry:2.8.3
	I0604 21:32:11.476849   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.477354   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.478319   14142 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0604 21:32:11.478385   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.479415   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.478485   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.479520   14142 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0604 21:32:11.479537   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0604 21:32:11.479568   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.480158   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.480399   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.482211   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.482569   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.482601   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.482812   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.482985   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.483267   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.483443   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:11.493173   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.493748   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.493779   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.494085   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.494481   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:11.494502   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0604 21:32:11.496003   14142 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52332->192.168.39.110:22: read: connection reset by peer
	I0604 21:32:11.496041   14142 retry.go:31] will retry after 152.266141ms: ssh: handshake failed: read tcp 192.168.39.1:52332->192.168.39.110:22: read: connection reset by peer
	W0604 21:32:11.496100   14142 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52344->192.168.39.110:22: read: connection reset by peer
	I0604 21:32:11.496116   14142 retry.go:31] will retry after 248.324698ms: ssh: handshake failed: read tcp 192.168.39.1:52344->192.168.39.110:22: read: connection reset by peer
	I0604 21:32:11.509954   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0604 21:32:11.510309   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:11.510736   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:11.510754   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:11.511121   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:11.511312   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:11.512498   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:11.514514   14142 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0604 21:32:11.516028   14142 out.go:177]   - Using image docker.io/busybox:stable
	I0604 21:32:11.517330   14142 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0604 21:32:11.517347   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0604 21:32:11.517359   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:11.520324   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.520723   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:11.520756   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:11.520878   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:11.521041   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:11.521174   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:11.521313   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	W0604 21:32:11.521932   14142 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52346->192.168.39.110:22: read: connection reset by peer
	I0604 21:32:11.521949   14142 retry.go:31] will retry after 325.587801ms: ssh: handshake failed: read tcp 192.168.39.1:52346->192.168.39.110:22: read: connection reset by peer
	I0604 21:32:11.720876   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0604 21:32:11.729355   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0604 21:32:11.729377   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0604 21:32:11.814769   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0604 21:32:11.869198   14142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 21:32:11.869264   14142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 21:32:11.949641   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:32:11.962363   14142 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0604 21:32:11.962388   14142 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0604 21:32:11.970565   14142 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0604 21:32:11.970623   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0604 21:32:12.006424   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0604 21:32:12.008993   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0604 21:32:12.009024   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0604 21:32:12.039942   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0604 21:32:12.041186   14142 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0604 21:32:12.041207   14142 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0604 21:32:12.066347   14142 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0604 21:32:12.066371   14142 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0604 21:32:12.113205   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 21:32:12.133550   14142 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0604 21:32:12.133573   14142 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0604 21:32:12.169447   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0604 21:32:12.185023   14142 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0604 21:32:12.185047   14142 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0604 21:32:12.207492   14142 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0604 21:32:12.207518   14142 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0604 21:32:12.253876   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0604 21:32:12.253902   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0604 21:32:12.267095   14142 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0604 21:32:12.267120   14142 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0604 21:32:12.268313   14142 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0604 21:32:12.268329   14142 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0604 21:32:12.299723   14142 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0604 21:32:12.299754   14142 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0604 21:32:12.306906   14142 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0604 21:32:12.306929   14142 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0604 21:32:12.386359   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0604 21:32:12.391203   14142 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0604 21:32:12.391226   14142 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0604 21:32:12.414710   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0604 21:32:12.414742   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0604 21:32:12.552989   14142 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0604 21:32:12.553013   14142 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0604 21:32:12.602064   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0604 21:32:12.616612   14142 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0604 21:32:12.616635   14142 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0604 21:32:12.638217   14142 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0604 21:32:12.638241   14142 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0604 21:32:12.645612   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0604 21:32:12.646557   14142 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0604 21:32:12.646571   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0604 21:32:12.749239   14142 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0604 21:32:12.749260   14142 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0604 21:32:12.758757   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0604 21:32:12.758774   14142 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0604 21:32:13.058017   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0604 21:32:13.058039   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0604 21:32:13.128223   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0604 21:32:13.176909   14142 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0604 21:32:13.176934   14142 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0604 21:32:13.260344   14142 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0604 21:32:13.260364   14142 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0604 21:32:13.287289   14142 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0604 21:32:13.287308   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0604 21:32:13.567006   14142 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:32:13.567037   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0604 21:32:13.614174   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0604 21:32:13.904403   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0604 21:32:13.904433   14142 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0604 21:32:13.994478   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.273566943s)
	I0604 21:32:13.994520   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:13.994529   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:13.994877   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:13.994897   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:13.994906   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:13.994908   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:13.994914   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:13.995139   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:13.995165   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:13.995172   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:14.071477   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:32:14.076115   14142 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0604 21:32:14.076145   14142 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0604 21:32:14.166692   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0604 21:32:14.166719   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0604 21:32:14.311872   14142 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0604 21:32:14.311900   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0604 21:32:14.600386   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0604 21:32:14.600410   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0604 21:32:14.796840   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0604 21:32:14.939359   14142 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0604 21:32:14.939386   14142 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0604 21:32:15.025083   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.210276432s)
	I0604 21:32:15.025111   14142 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.155878011s)
	I0604 21:32:15.025132   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:15.025143   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:15.025199   14142 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.15590686s)
	I0604 21:32:15.025224   14142 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0604 21:32:15.025472   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:15.025496   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:15.025511   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:15.025523   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:15.025750   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:15.025766   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:15.026114   14142 node_ready.go:35] waiting up to 6m0s for node "addons-450158" to be "Ready" ...
	I0604 21:32:15.031669   14142 node_ready.go:49] node "addons-450158" has status "Ready":"True"
	I0604 21:32:15.031693   14142 node_ready.go:38] duration metric: took 5.556017ms for node "addons-450158" to be "Ready" ...
	I0604 21:32:15.031705   14142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:32:15.052198   14142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:15.174724   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0604 21:32:15.529166   14142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-450158" context rescaled to 1 replicas
	I0604 21:32:17.077331   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:17.188860   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.239178115s)
	I0604 21:32:17.188886   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.182428033s)
	I0604 21:32:17.188904   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:17.188916   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:17.188937   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:17.188948   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:17.189219   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:17.189242   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:17.189252   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:17.189260   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:17.189293   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:17.189332   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:17.189343   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:17.189357   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:17.189368   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:17.189571   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:17.189585   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:17.189674   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:17.189692   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:18.447259   14142 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0604 21:32:18.447305   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:18.450740   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:18.451140   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:18.451172   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:18.451320   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:18.451557   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:18.451728   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:18.451907   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:19.069783   14142 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0604 21:32:19.089259   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:19.342946   14142 addons.go:234] Setting addon gcp-auth=true in "addons-450158"
	I0604 21:32:19.342994   14142 host.go:66] Checking if "addons-450158" exists ...
	I0604 21:32:19.343319   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:19.343348   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:19.357776   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0604 21:32:19.358185   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:19.358659   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:19.358680   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:19.359008   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:19.359582   14142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:32:19.359616   14142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:32:19.375730   14142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0604 21:32:19.376241   14142 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:32:19.376760   14142 main.go:141] libmachine: Using API Version  1
	I0604 21:32:19.376780   14142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:32:19.377135   14142 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:32:19.377360   14142 main.go:141] libmachine: (addons-450158) Calling .GetState
	I0604 21:32:19.379070   14142 main.go:141] libmachine: (addons-450158) Calling .DriverName
	I0604 21:32:19.379266   14142 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0604 21:32:19.379284   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHHostname
	I0604 21:32:19.382073   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:19.382475   14142 main.go:141] libmachine: (addons-450158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:57:1f", ip: ""} in network mk-addons-450158: {Iface:virbr1 ExpiryTime:2024-06-04 22:31:29 +0000 UTC Type:0 Mac:52:54:00:c2:57:1f Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-450158 Clientid:01:52:54:00:c2:57:1f}
	I0604 21:32:19.382496   14142 main.go:141] libmachine: (addons-450158) DBG | domain addons-450158 has defined IP address 192.168.39.110 and MAC address 52:54:00:c2:57:1f in network mk-addons-450158
	I0604 21:32:19.382683   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHPort
	I0604 21:32:19.382858   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHKeyPath
	I0604 21:32:19.383001   14142 main.go:141] libmachine: (addons-450158) Calling .GetSSHUsername
	I0604 21:32:19.383141   14142 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/addons-450158/id_rsa Username:docker}
	I0604 21:32:20.808472   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.768501933s)
	I0604 21:32:20.808537   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:20.808546   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.695304152s)
	I0604 21:32:20.808551   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:20.808597   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:20.808671   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:20.808904   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:20.808939   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:20.809022   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:20.809039   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:20.809052   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:20.809065   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:20.809087   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:20.809108   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:20.809126   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:20.809136   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:20.809257   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:20.809316   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:20.809327   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:20.809333   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:20.809344   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:20.809354   14142 addons.go:475] Verifying addon ingress=true in "addons-450158"
	I0604 21:32:20.810993   14142 out.go:177] * Verifying ingress addon...
	I0604 21:32:20.812615   14142 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0604 21:32:20.824352   14142 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0604 21:32:20.824369   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:20.881919   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:20.881938   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:20.882338   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:20.882378   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:20.882387   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:21.320293   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:21.597994   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:21.891684   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:22.380132   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:22.955130   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:23.325133   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:23.341054   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.9546231s)
	I0604 21:32:23.341078   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.171596906s)
	I0604 21:32:23.341108   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341109   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341119   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341123   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341138   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.738984931s)
	I0604 21:32:23.341145   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.695503703s)
	I0604 21:32:23.341166   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341177   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341209   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.212962956s)
	I0604 21:32:23.341228   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341245   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341250   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.727046596s)
	I0604 21:32:23.341270   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341282   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341391   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.269877709s)
	I0604 21:32:23.341407   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341423   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	W0604 21:32:23.341421   14142 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0604 21:32:23.341440   14142 retry.go:31] will retry after 150.590978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0604 21:32:23.341524   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.544648078s)
	I0604 21:32:23.341545   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341555   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341789   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.341799   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.341808   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341816   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341821   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.341844   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.341850   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.341857   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341863   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.341915   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.341938   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.341945   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.341952   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.341959   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.342048   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342073   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.342080   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.342118   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.342127   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.342189   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342250   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.342257   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.342264   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.342271   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.342283   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342303   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.342309   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.342317   14142 addons.go:475] Verifying addon registry=true in "addons-450158"
	I0604 21:32:23.345034   14142 out.go:177] * Verifying registry addon...
	I0604 21:32:23.342548   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342564   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342589   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.342590   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.342604   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.342612   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.343444   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.343476   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.343495   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.343511   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.343531   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.343545   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.345117   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.346487   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.345127   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.346498   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.346516   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.345137   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.345144   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.345157   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.345159   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.347915   14142 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-450158 service yakd-dashboard -n yakd-dashboard
	
	I0604 21:32:23.346628   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.346640   14142 addons.go:475] Verifying addon metrics-server=true in "addons-450158"
	I0604 21:32:23.346829   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.346884   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.347373   14142 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0604 21:32:23.349448   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.349612   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.349627   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.356971   14142 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0604 21:32:23.356992   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:23.378931   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.378944   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.379203   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.379218   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.379229   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.492906   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:32:23.825718   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:23.858315   14142 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.479027003s)
	I0604 21:32:23.859805   14142 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:32:23.858326   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.683541552s)
	I0604 21:32:23.859853   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.859868   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.861156   14142 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0604 21:32:23.862321   14142 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0604 21:32:23.862340   14142 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0604 21:32:23.861438   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.862425   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.862434   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:23.862442   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:23.861469   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:23.862971   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:23.862995   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:23.863006   14142 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-450158"
	I0604 21:32:23.864569   14142 out.go:177] * Verifying csi-hostpath-driver addon...
	I0604 21:32:23.866780   14142 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0604 21:32:23.896844   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:23.910943   14142 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0604 21:32:23.910966   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:23.922358   14142 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0604 21:32:23.922379   14142 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0604 21:32:23.945810   14142 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0604 21:32:23.945833   14142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0604 21:32:24.037789   14142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0604 21:32:24.057678   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:24.319630   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:24.354515   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:24.375955   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:24.817998   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:24.853987   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:24.875381   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:25.207732   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.714787469s)
	I0604 21:32:25.207774   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:25.207783   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:25.208059   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:25.208062   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:25.208087   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:25.208097   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:25.208105   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:25.208299   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:25.208331   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:25.208338   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:25.319873   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:25.360748   14142 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.322924312s)
	I0604 21:32:25.360793   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:25.360806   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:25.361144   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:25.361167   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:25.361177   14142 main.go:141] libmachine: Making call to close driver server
	I0604 21:32:25.361197   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:25.361262   14142 main.go:141] libmachine: (addons-450158) Calling .Close
	I0604 21:32:25.361461   14142 main.go:141] libmachine: Successfully made call to close driver server
	I0604 21:32:25.361478   14142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0604 21:32:25.361488   14142 main.go:141] libmachine: (addons-450158) DBG | Closing plugin on server side
	I0604 21:32:25.363263   14142 addons.go:475] Verifying addon gcp-auth=true in "addons-450158"
	I0604 21:32:25.364785   14142 out.go:177] * Verifying gcp-auth addon...
	I0604 21:32:25.366983   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:25.367091   14142 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0604 21:32:25.372817   14142 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0604 21:32:25.374186   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:25.816594   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:25.854096   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:25.873120   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:26.057999   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:26.318699   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:26.353810   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:26.372139   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:26.820527   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:26.853926   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:26.880250   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:27.317300   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:27.353702   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:27.373551   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:27.819024   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:27.865762   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:27.874457   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:28.058596   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:28.535617   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:28.535972   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:28.539916   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:28.816662   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:28.854996   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:28.882240   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:29.317365   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:29.354372   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:29.372592   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:29.821254   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:29.853704   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:29.872325   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:30.059122   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:30.317531   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:30.354179   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:30.371926   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:30.817436   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:30.854324   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:30.872299   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:31.316474   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:31.354097   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:31.373147   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:31.816660   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:31.854001   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:31.887065   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:32.319753   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:32.354667   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:32.373727   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:32.560389   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:32.819382   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:32.854602   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:32.872477   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:33.318248   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:33.355429   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:33.373706   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:33.817036   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:33.853592   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:33.872857   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:34.319712   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:34.354692   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:34.372831   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:34.817374   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:34.854640   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:34.872967   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:35.058716   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:35.316434   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:35.353768   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:35.371612   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:35.816758   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:35.854104   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:35.872441   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:36.316912   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:36.353067   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:36.371993   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:36.817048   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:36.853414   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:36.872586   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:37.059429   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:37.318275   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:37.354870   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:37.374683   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:37.817601   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:37.853942   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:37.873431   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:38.317648   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:38.353887   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:38.373283   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:38.817666   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:38.854829   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:38.873261   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:39.316949   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:39.356266   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:39.376980   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:39.562959   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:39.817752   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:39.855537   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:39.874183   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:40.317472   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:40.353622   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:40.377725   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:40.817263   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:40.853880   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:40.872984   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:41.320624   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:41.353904   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:41.376452   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:41.818508   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:41.853608   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:41.873656   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:42.058589   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:42.317341   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:42.354459   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:42.372638   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:42.816680   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:42.857272   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:42.872310   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:43.462549   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:43.465718   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:43.471190   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:43.817757   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:43.856497   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:43.873126   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:44.317930   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:44.353824   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:44.374131   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:44.558746   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:44.816462   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:44.854634   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:44.872543   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:45.316543   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:45.354097   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:45.372812   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:45.818374   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:45.860208   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:45.874147   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:46.316953   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:46.354098   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:46.372738   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:46.559721   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:46.884012   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:46.884251   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:46.885212   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:47.316843   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:47.358323   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:47.373088   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:47.821666   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:47.853820   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:47.875303   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:48.317382   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:48.353773   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:48.374575   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:48.822747   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:48.854397   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:48.872293   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:49.059945   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:49.317520   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:49.354412   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:49.372916   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:49.821919   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:49.854752   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:49.872977   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:50.317564   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:50.353513   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:50.372315   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:50.817260   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:50.854093   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:50.872730   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:51.317114   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:51.353527   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:51.374916   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:51.560546   14142 pod_ready.go:102] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"False"
	I0604 21:32:51.817197   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:51.853789   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:51.872675   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:52.318431   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:52.354189   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:52.372326   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:52.880170   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:52.880341   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:52.893937   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:53.063409   14142 pod_ready.go:92] pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.063433   14142 pod_ready.go:81] duration metric: took 38.01121186s for pod "coredns-7db6d8ff4d-7g9p9" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.063444   14142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c6572" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.066087   14142 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-c6572" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-c6572" not found
	I0604 21:32:53.066109   14142 pod_ready.go:81] duration metric: took 2.657099ms for pod "coredns-7db6d8ff4d-c6572" in "kube-system" namespace to be "Ready" ...
	E0604 21:32:53.066120   14142 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-c6572" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-c6572" not found
	I0604 21:32:53.066129   14142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.072642   14142 pod_ready.go:92] pod "etcd-addons-450158" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.072662   14142 pod_ready.go:81] duration metric: took 6.525914ms for pod "etcd-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.072670   14142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.084821   14142 pod_ready.go:92] pod "kube-apiserver-addons-450158" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.084847   14142 pod_ready.go:81] duration metric: took 12.170847ms for pod "kube-apiserver-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.084859   14142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.092906   14142 pod_ready.go:92] pod "kube-controller-manager-addons-450158" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.092927   14142 pod_ready.go:81] duration metric: took 8.05877ms for pod "kube-controller-manager-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.092939   14142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-999sd" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.256401   14142 pod_ready.go:92] pod "kube-proxy-999sd" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.256428   14142 pod_ready.go:81] duration metric: took 163.481002ms for pod "kube-proxy-999sd" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.256440   14142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.317052   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:53.353579   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:53.372741   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:53.656345   14142 pod_ready.go:92] pod "kube-scheduler-addons-450158" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:53.656369   14142 pod_ready.go:81] duration metric: took 399.920117ms for pod "kube-scheduler-addons-450158" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.656380   14142 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-974wq" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:53.817803   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:53.853902   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:53.872348   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:54.056657   14142 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-974wq" in "kube-system" namespace has status "Ready":"True"
	I0604 21:32:54.056683   14142 pod_ready.go:81] duration metric: took 400.29422ms for pod "nvidia-device-plugin-daemonset-974wq" in "kube-system" namespace to be "Ready" ...
	I0604 21:32:54.056695   14142 pod_ready.go:38] duration metric: took 39.024977359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:32:54.056715   14142 api_server.go:52] waiting for apiserver process to appear ...
	I0604 21:32:54.056767   14142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:32:54.079912   14142 api_server.go:72] duration metric: took 42.804748234s to wait for apiserver process to appear ...
	I0604 21:32:54.079931   14142 api_server.go:88] waiting for apiserver healthz status ...
	I0604 21:32:54.079951   14142 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0604 21:32:54.084813   14142 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I0604 21:32:54.085973   14142 api_server.go:141] control plane version: v1.30.1
	I0604 21:32:54.085995   14142 api_server.go:131] duration metric: took 6.058618ms to wait for apiserver health ...
	I0604 21:32:54.086003   14142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 21:32:54.262426   14142 system_pods.go:59] 18 kube-system pods found
	I0604 21:32:54.262460   14142 system_pods.go:61] "coredns-7db6d8ff4d-7g9p9" [597e9dc9-c5eb-424e-b8f3-a5d0a81827ad] Running
	I0604 21:32:54.262473   14142 system_pods.go:61] "csi-hostpath-attacher-0" [f1b52b20-00ea-45df-8a5e-11cfcc70d8ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0604 21:32:54.262482   14142 system_pods.go:61] "csi-hostpath-resizer-0" [dc9e4323-3f00-4efc-bb6e-994b3ebb1186] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0604 21:32:54.262495   14142 system_pods.go:61] "csi-hostpathplugin-s7bff" [3199bb2d-0b11-4d2f-b1e8-14f8d4b94525] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0604 21:32:54.262503   14142 system_pods.go:61] "etcd-addons-450158" [875b5c7a-bbc6-4f24-9171-f749047b8e67] Running
	I0604 21:32:54.262510   14142 system_pods.go:61] "kube-apiserver-addons-450158" [ed43a930-d907-4006-a964-5000073de857] Running
	I0604 21:32:54.262519   14142 system_pods.go:61] "kube-controller-manager-addons-450158" [35014339-f375-4ced-8b4a-7ef3dc8bf90d] Running
	I0604 21:32:54.262523   14142 system_pods.go:61] "kube-ingress-dns-minikube" [5f228f7b-77e2-4739-80b9-da8100dfa8b3] Running
	I0604 21:32:54.262526   14142 system_pods.go:61] "kube-proxy-999sd" [f0234aba-e1ee-4309-af83-e5672d818038] Running
	I0604 21:32:54.262531   14142 system_pods.go:61] "kube-scheduler-addons-450158" [4e0c14cf-4c9b-432f-99b3-92b85dcd9ffd] Running
	I0604 21:32:54.262534   14142 system_pods.go:61] "metrics-server-c59844bb4-7kq77" [8317cea0-7258-483a-be4d-2432c8e2cb4e] Running
	I0604 21:32:54.262537   14142 system_pods.go:61] "nvidia-device-plugin-daemonset-974wq" [98dfc179-56fb-4106-861e-9efeae4bfef4] Running
	I0604 21:32:54.262541   14142 system_pods.go:61] "registry-jv5wd" [d4cc40ea-9c44-4907-96be-00b380775884] Running
	I0604 21:32:54.262545   14142 system_pods.go:61] "registry-proxy-8w8gc" [095344ac-eb93-473e-992b-e4b40d5343a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0604 21:32:54.262552   14142 system_pods.go:61] "snapshot-controller-745499f584-kscns" [da9461fc-e1f0-4720-959e-529b12510ffc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0604 21:32:54.262558   14142 system_pods.go:61] "snapshot-controller-745499f584-nhwhf" [80dcb717-7bcf-42c7-a6a8-5c0060da7cd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0604 21:32:54.262562   14142 system_pods.go:61] "storage-provisioner" [d4308a6e-a61c-4435-ac2b-a297e4fb6f63] Running
	I0604 21:32:54.262568   14142 system_pods.go:61] "tiller-deploy-6677d64bcd-8wdfb" [8a627a17-28cd-4241-995a-6f4c6f44f816] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0604 21:32:54.262578   14142 system_pods.go:74] duration metric: took 176.569464ms to wait for pod list to return data ...
	I0604 21:32:54.262591   14142 default_sa.go:34] waiting for default service account to be created ...
	I0604 21:32:54.539611   14142 default_sa.go:45] found service account: "default"
	I0604 21:32:54.539639   14142 default_sa.go:55] duration metric: took 277.036375ms for default service account to be created ...
	I0604 21:32:54.539655   14142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 21:32:54.544393   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:54.544426   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:54.547567   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:54.662479   14142 system_pods.go:86] 18 kube-system pods found
	I0604 21:32:54.662503   14142 system_pods.go:89] "coredns-7db6d8ff4d-7g9p9" [597e9dc9-c5eb-424e-b8f3-a5d0a81827ad] Running
	I0604 21:32:54.662511   14142 system_pods.go:89] "csi-hostpath-attacher-0" [f1b52b20-00ea-45df-8a5e-11cfcc70d8ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0604 21:32:54.662519   14142 system_pods.go:89] "csi-hostpath-resizer-0" [dc9e4323-3f00-4efc-bb6e-994b3ebb1186] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0604 21:32:54.662526   14142 system_pods.go:89] "csi-hostpathplugin-s7bff" [3199bb2d-0b11-4d2f-b1e8-14f8d4b94525] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0604 21:32:54.662530   14142 system_pods.go:89] "etcd-addons-450158" [875b5c7a-bbc6-4f24-9171-f749047b8e67] Running
	I0604 21:32:54.662536   14142 system_pods.go:89] "kube-apiserver-addons-450158" [ed43a930-d907-4006-a964-5000073de857] Running
	I0604 21:32:54.662541   14142 system_pods.go:89] "kube-controller-manager-addons-450158" [35014339-f375-4ced-8b4a-7ef3dc8bf90d] Running
	I0604 21:32:54.662548   14142 system_pods.go:89] "kube-ingress-dns-minikube" [5f228f7b-77e2-4739-80b9-da8100dfa8b3] Running
	I0604 21:32:54.662551   14142 system_pods.go:89] "kube-proxy-999sd" [f0234aba-e1ee-4309-af83-e5672d818038] Running
	I0604 21:32:54.662558   14142 system_pods.go:89] "kube-scheduler-addons-450158" [4e0c14cf-4c9b-432f-99b3-92b85dcd9ffd] Running
	I0604 21:32:54.662562   14142 system_pods.go:89] "metrics-server-c59844bb4-7kq77" [8317cea0-7258-483a-be4d-2432c8e2cb4e] Running
	I0604 21:32:54.662568   14142 system_pods.go:89] "nvidia-device-plugin-daemonset-974wq" [98dfc179-56fb-4106-861e-9efeae4bfef4] Running
	I0604 21:32:54.662573   14142 system_pods.go:89] "registry-jv5wd" [d4cc40ea-9c44-4907-96be-00b380775884] Running
	I0604 21:32:54.662580   14142 system_pods.go:89] "registry-proxy-8w8gc" [095344ac-eb93-473e-992b-e4b40d5343a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0604 21:32:54.662585   14142 system_pods.go:89] "snapshot-controller-745499f584-kscns" [da9461fc-e1f0-4720-959e-529b12510ffc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0604 21:32:54.662593   14142 system_pods.go:89] "snapshot-controller-745499f584-nhwhf" [80dcb717-7bcf-42c7-a6a8-5c0060da7cd1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0604 21:32:54.662598   14142 system_pods.go:89] "storage-provisioner" [d4308a6e-a61c-4435-ac2b-a297e4fb6f63] Running
	I0604 21:32:54.662608   14142 system_pods.go:89] "tiller-deploy-6677d64bcd-8wdfb" [8a627a17-28cd-4241-995a-6f4c6f44f816] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0604 21:32:54.662616   14142 system_pods.go:126] duration metric: took 122.955995ms to wait for k8s-apps to be running ...
	I0604 21:32:54.662623   14142 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 21:32:54.662665   14142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:32:54.678886   14142 system_svc.go:56] duration metric: took 16.258347ms WaitForService to wait for kubelet
	I0604 21:32:54.678905   14142 kubeadm.go:576] duration metric: took 43.403743696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 21:32:54.678920   14142 node_conditions.go:102] verifying NodePressure condition ...
	I0604 21:32:54.817514   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:54.856530   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:54.857668   14142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 21:32:54.857697   14142 node_conditions.go:123] node cpu capacity is 2
	I0604 21:32:54.857713   14142 node_conditions.go:105] duration metric: took 178.787844ms to run NodePressure ...
	I0604 21:32:54.857727   14142 start.go:240] waiting for startup goroutines ...
	I0604 21:32:54.873653   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:55.316594   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:55.353551   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:55.372086   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:55.817545   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:55.854117   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:55.872736   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:56.317121   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:56.353956   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:56.372172   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:56.817582   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:56.854361   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:56.872071   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:57.318795   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:57.354394   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:57.373592   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:57.817598   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:57.854981   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:57.882726   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:58.317904   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:58.354665   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:58.372437   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:58.816424   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:58.854751   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:58.874152   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:59.317975   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:59.353261   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:59.371367   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:32:59.818391   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:32:59.854258   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:32:59.872952   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:00.317093   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:00.354664   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:00.373066   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:00.817919   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:00.855559   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:00.871875   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:01.317516   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:01.353871   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:01.371823   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:01.818110   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:01.853617   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:01.874041   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:02.318536   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:02.353897   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:02.376266   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:02.818472   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:02.853963   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:02.871963   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:03.317269   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:03.353365   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:03.371496   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:03.817256   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:03.854560   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:03.872076   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:04.317676   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:04.353940   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:04.372034   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:04.819396   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:04.854410   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:04.872126   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:05.317138   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:05.353307   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:05.373434   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:05.817527   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:05.853134   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:05.872650   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:06.316980   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:06.352910   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:06.376317   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:06.817651   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:06.853873   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:06.873890   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:07.317290   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:07.353798   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:07.374759   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:07.819383   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:07.854425   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:07.872411   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:08.318180   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:08.354182   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:08.373533   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:08.817432   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:08.853348   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:08.872744   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:09.316896   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:09.354157   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:09.377063   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:09.816909   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:09.854198   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:09.873624   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:10.318989   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:10.355521   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:10.377331   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:10.820294   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:10.855041   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:10.876394   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:11.317434   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:11.354488   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:11.372907   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:11.819424   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:11.854250   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:11.871436   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:12.317036   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:12.354725   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:12.374161   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:12.816483   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:12.854252   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:12.871799   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:13.317290   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:13.354288   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:13.372015   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:13.818477   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:13.854537   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:13.872693   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:14.319424   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:14.354955   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:14.373729   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:14.816681   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:14.854061   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:14.873136   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:15.317494   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:15.354316   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:15.374917   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:15.817432   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:15.853944   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:15.873146   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:16.317373   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:16.353183   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:16.372382   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:16.825260   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:16.853823   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:16.872699   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:17.317305   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:17.354388   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:17.373556   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:17.818004   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:17.854313   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:17.874781   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:18.318119   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:18.353718   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:33:18.372714   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:18.817313   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:18.856435   14142 kapi.go:107] duration metric: took 55.50906139s to wait for kubernetes.io/minikube-addons=registry ...
	I0604 21:33:18.877953   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:19.317623   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:19.372443   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:19.820137   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:19.873265   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:20.317206   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:20.373426   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:21.093267   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:21.093802   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:21.316996   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:21.373829   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:21.820562   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:21.888876   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:22.317944   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:22.374461   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:22.816410   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:22.872773   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:23.317327   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:23.373488   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:23.818114   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:23.872825   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:24.317443   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:24.371653   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:24.821669   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:24.882229   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:25.317367   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:25.373357   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:25.819624   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:25.872813   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:26.317455   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:26.373959   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:26.921936   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:26.923506   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:27.316667   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:27.372498   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:27.820557   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:27.873095   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:28.317164   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:28.372609   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:28.817891   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:28.883846   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:29.317114   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:29.373155   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:29.821528   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:29.872436   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:30.318269   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:30.372120   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:30.822159   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:30.872576   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:31.317449   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:31.373389   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:31.820472   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:31.871954   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:32.319020   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:32.373028   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:32.818078   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:32.873699   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:33.318370   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:33.373786   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:33.817503   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:33.872344   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:34.316874   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:34.372389   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:34.817380   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:34.873659   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:35.316243   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:35.372948   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:35.819555   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:35.874660   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:36.323672   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:36.374671   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:36.817699   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:36.873569   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:37.317573   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:37.373042   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:37.821291   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:37.876341   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:38.318457   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:38.372802   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:38.816975   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:38.874014   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:39.316658   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:39.372386   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:39.817239   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:39.872951   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:40.317049   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:40.373220   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:40.820725   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:40.878011   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:41.318177   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:41.374572   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:41.817351   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:41.873288   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:42.316703   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:42.372848   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:42.817466   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:42.872111   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:43.316663   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:43.373117   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:43.822969   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:43.872957   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:44.318922   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:44.372693   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:44.816068   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:44.873876   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:45.317878   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:45.373034   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:45.819290   14142 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:33:45.874399   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:46.317440   14142 kapi.go:107] duration metric: took 1m25.504819595s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0604 21:33:46.372652   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:46.877138   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:47.375569   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:47.877354   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:48.374133   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:48.873870   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:49.373611   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:49.874282   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:50.373915   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:50.879541   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:51.371856   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:51.873678   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:33:52.372225   14142 kapi.go:107] duration metric: took 1m28.505442674s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0604 21:35:10.379578   14142 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0604 21:35:10.379600   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:10.870804   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:11.370553   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:11.870365   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:12.372010   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:12.870611   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:13.371664   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:13.871740   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:14.372927   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:14.871511   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:15.370391   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:15.871720   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:16.370253   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:16.871569   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:17.370967   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:17.870556   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:18.370625   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:18.871022   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:19.370299   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:19.870877   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:20.370545   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:20.870943   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:21.371030   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:21.870954   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:22.371685   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:22.871646   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:23.371322   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:23.870534   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:24.373199   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:24.872484   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:25.370306   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:25.870821   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:26.370591   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:26.870806   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:27.372041   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:27.870481   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:28.371744   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:28.871386   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:29.370589   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:29.871316   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:30.372319   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:30.871520   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:31.370836   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:31.871150   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:32.370974   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:32.871793   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:33.373359   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:33.872198   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:34.370774   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:34.870936   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:35.371116   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:35.870878   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:36.370909   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:36.871170   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:37.371030   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:37.871221   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:38.370831   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:38.873202   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:39.370945   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:39.871058   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:40.371170   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:40.871064   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:41.372037   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:41.874738   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:42.372432   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:42.871103   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:43.371643   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:43.871717   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:44.371430   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:44.871548   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:45.370671   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:45.871507   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:46.378277   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:46.871270   14142 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:35:47.376319   14142 kapi.go:107] duration metric: took 3m22.009223305s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0604 21:35:47.377821   14142 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-450158 cluster.
	I0604 21:35:47.379274   14142 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0604 21:35:47.380790   14142 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0604 21:35:47.382232   14142 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, yakd, metrics-server, helm-tiller, volcano, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0604 21:35:47.383453   14142 addons.go:510] duration metric: took 3m36.108251638s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner default-storageclass inspektor-gadget yakd metrics-server helm-tiller volcano storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0604 21:35:47.383484   14142 start.go:245] waiting for cluster config update ...
	I0604 21:35:47.383500   14142 start.go:254] writing updated cluster config ...
	I0604 21:35:47.383744   14142 ssh_runner.go:195] Run: rm -f paused
	I0604 21:35:47.441885   14142 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 21:35:47.443771   14142 out.go:177] * Done! kubectl is now configured to use "addons-450158" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	191e1f23d3280       beae173ccac6a       4 seconds ago        Exited              registry-test                            0                   338e8758c296c       registry-test
	945a59a3f329c       98f6c3b32d565       7 seconds ago        Exited              helm-test                                0                   80c80c8bdb18a       helm-test
	4a6af9ae2d590       2cfebb9f82f21       11 seconds ago       Running             headlamp                                 0                   c97e9d7c2ae3e       headlamp-7fc69f7444-rtj6l
	350ae3ffc15a9       db2fc13d44d50       20 seconds ago       Running             gcp-auth                                 0                   74202d3238947       gcp-auth-5db96cd9b4-b79kl
	675715cbf41c2       684c5ea3b61b2       38 seconds ago       Exited              patch                                    0                   97be21d45c68f       gcp-auth-certs-patch-rc24p
	f7875cb65817d       684c5ea3b61b2       38 seconds ago       Exited              create                                   0                   63aa358902fff       gcp-auth-certs-create-ls8xh
	564b261b1f465       9c77da827b938       53 seconds ago       Exited              gadget                                   4                   9f2c7f72e77bb       gadget-jhs4g
	8e602f1cd6a39       ae3a194a79dc4       About a minute ago   Running             volcano-scheduler                        1                   87963325b0fb2       volcano-scheduler-765f888978-2mtmf
	01e6c319f9a82       fd19c461b125e       About a minute ago   Running             admission                                0                   93006b49563d3       volcano-admission-7b497cf95b-sfsq5
	bc5d8ac86af66       738351fd438f0       2 minutes ago        Running             csi-snapshotter                          0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	7340257423312       931dbfd16f87c       2 minutes ago        Running             csi-provisioner                          0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	e8c8f80b9c02e       e899260153aed       2 minutes ago        Running             liveness-probe                           0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	573e62f21fdc2       e255e073c508c       2 minutes ago        Running             hostpath                                 0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	eb075beb1711c       88ef14a257f42       2 minutes ago        Running             node-driver-registrar                    0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	c5622bd7d3947       ee54966f3891d       2 minutes ago        Running             controller                               0                   2c6c92421f71f       ingress-nginx-controller-768f948f8f-dwmt6
	193dae34b107f       59cbb42146a37       2 minutes ago        Running             csi-attacher                             0                   afd2abfceffa9       csi-hostpath-attacher-0
	aa4c8574810c5       a1ed5895ba635       2 minutes ago        Running             csi-external-health-monitor-controller   0                   5c4adb74f510e       csi-hostpathplugin-s7bff
	e6c38d1469491       19a639eda60f0       2 minutes ago        Running             csi-resizer                              0                   2d4be3e1f648e       csi-hostpath-resizer-0
	cf02cc59eabdc       ae3a194a79dc4       2 minutes ago        Exited              volcano-scheduler                        0                   87963325b0fb2       volcano-scheduler-765f888978-2mtmf
	b5c01c9c6f35a       641c85390e179       2 minutes ago        Running             volcano-controller                       0                   a7728a870d005       volcano-controller-86c5446455-99dhh
	25fd261311cc3       fd19c461b125e       2 minutes ago        Exited              main                                     0                   ac0523978bf96       volcano-admission-init-hfn6l
	b10b4355dabad       684c5ea3b61b2       2 minutes ago        Exited              patch                                    0                   1a8e6733c0577       ingress-nginx-admission-patch-2ldqt
	b3414e7e2d419       684c5ea3b61b2       2 minutes ago        Exited              create                                   0                   59f4c1919666c       ingress-nginx-admission-create-5zgpz
	33f488f4f8671       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   1d4593d24f008       snapshot-controller-745499f584-nhwhf
	046d05205377a       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   0a8d5d342495c       snapshot-controller-745499f584-kscns
	c85c16707dd30       e16d1e3a10667       3 minutes ago        Running             local-path-provisioner                   0                   b77285d3b11c8       local-path-provisioner-8d985888d-75j9n
	d99424600e64c       31de47c733c91       3 minutes ago        Running             yakd                                     0                   913b1f3bb1845       yakd-dashboard-5ddbf7d777-scdnc
	744ab2375c1b0       d6b2c32a0f145       3 minutes ago        Exited              registry                                 0                   6a1c314253849       registry-jv5wd
	6ad55dce1e244       1499ed4fbd0aa       3 minutes ago        Running             minikube-ingress-dns                     0                   96d4de4102d28       kube-ingress-dns-minikube
	688a939788458       0ff6c6518681d       3 minutes ago        Running             cloud-spanner-emulator                   0                   cf87877bfc9d8       cloud-spanner-emulator-6fcd4f6f98-79sdj
	675b0f9fd8cc5       6e38f40d628db       3 minutes ago        Running             storage-provisioner                      0                   49c1fcc2a1580       storage-provisioner
	1654224f69f20       747097150317f       3 minutes ago        Running             kube-proxy                               0                   1254819afce3f       kube-proxy-999sd
	77f3d9cef51af       cbb01a7bd410d       3 minutes ago        Running             coredns                                  0                   38bda06400a0d       coredns-7db6d8ff4d-7g9p9
	694ad31b48ed2       3861cfcd7c04c       4 minutes ago        Running             etcd                                     0                   f1e815b3da190       etcd-addons-450158
	47f37dd6b7762       a52dc94f0a912       4 minutes ago        Running             kube-scheduler                           0                   a54c48a878b52       kube-scheduler-addons-450158
	6f4b9316974b5       25a1387cdab82       4 minutes ago        Running             kube-controller-manager                  0                   016cb29db9e44       kube-controller-manager-addons-450158
	41c73eab8ef50       91be940803172       4 minutes ago        Running             kube-apiserver                           0                   29528436bfd48       kube-apiserver-addons-450158
	
	
	==> containerd <==
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.186205304Z" level=info msg="Stop container \"737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26\" with signal terminated"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.287287171Z" level=info msg="shim disconnected" id=737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26 namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.287528440Z" level=warning msg="cleaning up after shim disconnected" id=737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26 namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.287707890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.340145512Z" level=info msg="StopContainer for \"737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26\" returns successfully"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.341720497Z" level=info msg="StopPodSandbox for \"532c8fa9697927faf9c69d078d5d44c9a8a89b96279df0828ee18c2a859feef4\""
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.342002885Z" level=info msg="Container to stop \"737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.446870586Z" level=info msg="shim disconnected" id=532c8fa9697927faf9c69d078d5d44c9a8a89b96279df0828ee18c2a859feef4 namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.449423166Z" level=warning msg="cleaning up after shim disconnected" id=532c8fa9697927faf9c69d078d5d44c9a8a89b96279df0828ee18c2a859feef4 namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.450764083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.458187921Z" level=info msg="RemoveContainer for \"001789f9cb17c89170ca349f7304c7ae9901340d0c3d8e685213122306eca809\""
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.479326450Z" level=info msg="RemoveContainer for \"001789f9cb17c89170ca349f7304c7ae9901340d0c3d8e685213122306eca809\" returns successfully"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.483404236Z" level=info msg="RemoveContainer for \"6b8c31db4f040087152b5e0cd04a607579844935a86021f4056c0c1f8267e920\""
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.509205174Z" level=warning msg="cleanup warnings time=\"2024-06-04T21:36:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.513283927Z" level=info msg="RemoveContainer for \"6b8c31db4f040087152b5e0cd04a607579844935a86021f4056c0c1f8267e920\" returns successfully"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.659827256Z" level=info msg="TearDown network for sandbox \"532c8fa9697927faf9c69d078d5d44c9a8a89b96279df0828ee18c2a859feef4\" successfully"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.660064043Z" level=info msg="StopPodSandbox for \"532c8fa9697927faf9c69d078d5d44c9a8a89b96279df0828ee18c2a859feef4\" returns successfully"
	Jun 04 21:36:06 addons-450158 containerd[649]: time="2024-06-04T21:36:06.941866415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:494be2ac-658b-4b7a-b0d5-e1d7071d7f8c,Namespace:default,Attempt:0,}"
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.125055817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.125969755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.126333023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.127908687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.257748361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:494be2ac-658b-4b7a-b0d5-e1d7071d7f8c,Namespace:default,Attempt:0,} returns sandbox id \"0abd3afbe6f869de852c09fab9e564474463859d39497ccbb9f205afe5b2645a\""
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.475233006Z" level=info msg="RemoveContainer for \"737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26\""
	Jun 04 21:36:07 addons-450158 containerd[649]: time="2024-06-04T21:36:07.492822186Z" level=info msg="RemoveContainer for \"737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26\" returns successfully"
	
	
	==> coredns [77f3d9cef51af3c52d1ff40ff587f0229784b02be1376925816b38c9d8f3edbe] <==
	[INFO] 10.244.0.9:40247 - 39048 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00078757s
	[INFO] 10.244.0.9:46457 - 43292 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192107s
	[INFO] 10.244.0.9:46457 - 12318 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086639s
	[INFO] 10.244.0.9:45837 - 63865 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101483s
	[INFO] 10.244.0.9:45837 - 39547 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086529s
	[INFO] 10.244.0.9:40859 - 18903 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113255s
	[INFO] 10.244.0.9:40859 - 2009 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088964s
	[INFO] 10.244.0.9:49947 - 64021 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049181s
	[INFO] 10.244.0.9:49947 - 52246 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043054s
	[INFO] 10.244.0.9:36135 - 16652 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030842s
	[INFO] 10.244.0.9:36135 - 37385 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029487s
	[INFO] 10.244.0.9:41175 - 64526 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042562s
	[INFO] 10.244.0.9:41175 - 61187 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032336s
	[INFO] 10.244.0.9:43436 - 40997 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029854s
	[INFO] 10.244.0.9:43436 - 17444 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038657s
	[INFO] 10.244.0.26:43019 - 4636 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000564937s
	[INFO] 10.244.0.26:54076 - 20143 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216677s
	[INFO] 10.244.0.26:46611 - 7968 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119031s
	[INFO] 10.244.0.26:44506 - 36354 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000183801s
	[INFO] 10.244.0.26:35895 - 43444 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078718s
	[INFO] 10.244.0.26:43675 - 30703 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011976s
	[INFO] 10.244.0.26:38915 - 43411 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000394862s
	[INFO] 10.244.0.26:37117 - 11520 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001013775s
	[INFO] 10.244.0.29:46019 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000348849s
	[INFO] 10.244.0.29:45824 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182444s
	
	
	==> describe nodes <==
	Name:               addons-450158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-450158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=addons-450158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T21_31_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-450158
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-450158"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 21:31:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-450158
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 21:36:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 21:36:02 +0000   Tue, 04 Jun 2024 21:31:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 21:36:02 +0000   Tue, 04 Jun 2024 21:31:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 21:36:02 +0000   Tue, 04 Jun 2024 21:31:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 21:36:02 +0000   Tue, 04 Jun 2024 21:31:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    addons-450158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 425bc34b841b442885020a34dde37ba5
	  System UUID:                425bc34b-841b-4428-8502-0a34dde37ba5
	  Boot ID:                    f35864a7-e29c-47eb-a563-28afbd418baa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.17
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-79sdj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-jhs4g                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  gcp-auth                    gcp-auth-5db96cd9b4-b79kl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  headlamp                    headlamp-7fc69f7444-rtj6l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-dwmt6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m48s
	  kube-system                 coredns-7db6d8ff4d-7g9p9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 csi-hostpathplugin-s7bff                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-addons-450158                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m11s
	  kube-system                 kube-apiserver-addons-450158                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-addons-450158        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-999sd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-450158                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-745499f584-kscns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 snapshot-controller-745499f584-nhwhf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  local-path-storage          local-path-provisioner-8d985888d-75j9n       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  volcano-system              volcano-admission-7b497cf95b-sfsq5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  volcano-system              volcano-controller-86c5446455-99dhh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  volcano-system              volcano-scheduler-765f888978-2mtmf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-scdnc              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m53s  kube-proxy       
	  Normal  Starting                 4m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m11s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m11s  kubelet          Node addons-450158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s  kubelet          Node addons-450158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s  kubelet          Node addons-450158 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m10s  kubelet          Node addons-450158 status is now: NodeReady
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-450158 event: Registered Node addons-450158 in Controller
	
	
	==> dmesg <==
	[  +5.105416] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.069568] kauditd_printk_skb: 30 callbacks suppressed
	[Jun 4 21:32] systemd-fstab-generator[1415]: Ignoring "noauto" option for root device
	[  +0.182702] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.480019] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.052901] kauditd_printk_skb: 138 callbacks suppressed
	[  +7.429740] kauditd_printk_skb: 90 callbacks suppressed
	[ +19.863553] kauditd_printk_skb: 4 callbacks suppressed
	[Jun 4 21:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.553493] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.536088] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.025697] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.520814] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.912664] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.309502] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.192881] kauditd_printk_skb: 21 callbacks suppressed
	[Jun 4 21:34] kauditd_printk_skb: 24 callbacks suppressed
	[ +31.206569] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 4 21:35] kauditd_printk_skb: 19 callbacks suppressed
	[ +14.838777] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.761639] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.871931] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.827326] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.092328] kauditd_printk_skb: 14 callbacks suppressed
	[Jun 4 21:36] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [694ad31b48ed2b8edeef9084e016064a7045fe1456adfef48c1ef7ce2990d2f3] <==
	{"level":"info","ts":"2024-06-04T21:33:22.135095Z","caller":"traceutil/trace.go:171","msg":"trace[383722470] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:1143; }","duration":"217.521696ms","start":"2024-06-04T21:33:21.917564Z","end":"2024-06-04T21:33:22.135085Z","steps":["trace[383722470] 'agreement among raft nodes before linearized reading'  (duration: 217.370734ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T21:33:22.13533Z","caller":"traceutil/trace.go:171","msg":"trace[1103531634] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"273.006786ms","start":"2024-06-04T21:33:21.862309Z","end":"2024-06-04T21:33:22.135315Z","steps":["trace[1103531634] 'process raft request'  (duration: 215.089223ms)","trace[1103531634] 'compare'  (duration: 56.609281ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:33:26.891712Z","caller":"traceutil/trace.go:171","msg":"trace[953875917] linearizableReadLoop","detail":"{readStateIndex:1198; appliedIndex:1197; }","duration":"101.517224ms","start":"2024-06-04T21:33:26.790163Z","end":"2024-06-04T21:33:26.89168Z","steps":["trace[953875917] 'read index received'  (duration: 100.748647ms)","trace[953875917] 'applied index is now lower than readState.Index'  (duration: 767.801µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:33:26.893327Z","caller":"traceutil/trace.go:171","msg":"trace[127024392] transaction","detail":"{read_only:false; response_revision:1168; number_of_response:1; }","duration":"239.11108ms","start":"2024-06-04T21:33:26.654196Z","end":"2024-06-04T21:33:26.893307Z","steps":["trace[127024392] 'process raft request'  (duration: 236.764994ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:26.895256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.638309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-06-04T21:33:26.895283Z","caller":"traceutil/trace.go:171","msg":"trace[898975537] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1168; }","duration":"101.680503ms","start":"2024-06-04T21:33:26.79359Z","end":"2024-06-04T21:33:26.89527Z","steps":["trace[898975537] 'agreement among raft nodes before linearized reading'  (duration: 101.597197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:26.895014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.811954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.110\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-06-04T21:33:26.895375Z","caller":"traceutil/trace.go:171","msg":"trace[1254662818] range","detail":"{range_begin:/registry/masterleases/192.168.39.110; range_end:; response_count:1; response_revision:1168; }","duration":"105.23297ms","start":"2024-06-04T21:33:26.790138Z","end":"2024-06-04T21:33:26.895371Z","steps":["trace[1254662818] 'agreement among raft nodes before linearized reading'  (duration: 103.460385ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T21:33:37.754572Z","caller":"traceutil/trace.go:171","msg":"trace[366941344] linearizableReadLoop","detail":"{readStateIndex:1244; appliedIndex:1243; }","duration":"169.82504ms","start":"2024-06-04T21:33:37.584732Z","end":"2024-06-04T21:33:37.754557Z","steps":["trace[366941344] 'read index received'  (duration: 169.405612ms)","trace[366941344] 'applied index is now lower than readState.Index'  (duration: 418.804µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:33:37.754965Z","caller":"traceutil/trace.go:171","msg":"trace[1587078201] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"309.606181ms","start":"2024-06-04T21:33:37.445157Z","end":"2024-06-04T21:33:37.754763Z","steps":["trace[1587078201] 'process raft request'  (duration: 309.076698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:37.755143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.391905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-04T21:33:37.755902Z","caller":"traceutil/trace.go:171","msg":"trace[1977121783] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1211; }","duration":"171.183905ms","start":"2024-06-04T21:33:37.584708Z","end":"2024-06-04T21:33:37.755891Z","steps":["trace[1977121783] 'agreement among raft nodes before linearized reading'  (duration: 170.32274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:37.756158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:33:37.445143Z","time spent":"310.636829ms","remote":"127.0.0.1:55914","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-dnen7dzi2no3cgesz3lec5j7em\" mod_revision:1170 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-dnen7dzi2no3cgesz3lec5j7em\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-dnen7dzi2no3cgesz3lec5j7em\" > >"}
	{"level":"warn","ts":"2024-06-04T21:33:57.156714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:33:56.797477Z","time spent":"359.233752ms","remote":"127.0.0.1:55678","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-06-04T21:33:57.158588Z","caller":"traceutil/trace.go:171","msg":"trace[1473179471] linearizableReadLoop","detail":"{readStateIndex:1331; appliedIndex:1329; }","duration":"312.495917ms","start":"2024-06-04T21:33:56.846082Z","end":"2024-06-04T21:33:57.158578Z","steps":["trace[1473179471] 'read index received'  (duration: 310.394677ms)","trace[1473179471] 'applied index is now lower than readState.Index'  (duration: 2.100798ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:33:57.161026Z","caller":"traceutil/trace.go:171","msg":"trace[1771788502] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"363.341294ms","start":"2024-06-04T21:33:56.797569Z","end":"2024-06-04T21:33:57.160911Z","steps":["trace[1771788502] 'process raft request'  (duration: 360.818985ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:57.161153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:33:56.797558Z","time spent":"363.515253ms","remote":"127.0.0.1:55742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":778,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-jv5wd.17d5ea9015674720\" mod_revision:1042 > success:<request_put:<key:\"/registry/events/kube-system/registry-jv5wd.17d5ea9015674720\" value_size:700 lease:2709073390285171280 >> failure:<request_range:<key:\"/registry/events/kube-system/registry-jv5wd.17d5ea9015674720\" > >"}
	{"level":"warn","ts":"2024-06-04T21:33:57.16125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.171631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-04T21:33:57.161353Z","caller":"traceutil/trace.go:171","msg":"trace[1274019366] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1294; }","duration":"315.300373ms","start":"2024-06-04T21:33:56.846036Z","end":"2024-06-04T21:33:57.161336Z","steps":["trace[1274019366] 'agreement among raft nodes before linearized reading'  (duration: 313.257898ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:57.161435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:33:56.846023Z","time spent":"315.404487ms","remote":"127.0.0.1:55850","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-06-04T21:33:57.165928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.281533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-04T21:33:57.167357Z","caller":"traceutil/trace.go:171","msg":"trace[1189491707] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1294; }","duration":"308.7407ms","start":"2024-06-04T21:33:56.858603Z","end":"2024-06-04T21:33:57.167344Z","steps":["trace[1189491707] 'agreement among raft nodes before linearized reading'  (duration: 300.803863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:33:57.16754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:33:56.858591Z","time spent":"308.894406ms","remote":"127.0.0.1:56182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":7,"response size":31,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	{"level":"warn","ts":"2024-06-04T21:33:57.174562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.90231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-04T21:33:57.175153Z","caller":"traceutil/trace.go:171","msg":"trace[123741914] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1294; }","duration":"144.407199ms","start":"2024-06-04T21:33:57.030626Z","end":"2024-06-04T21:33:57.175034Z","steps":["trace[123741914] 'agreement among raft nodes before linearized reading'  (duration: 128.79757ms)"],"step_count":1}
	
	
	==> gcp-auth [350ae3ffc15a9107fa3d2191062e2f7013c1ad2eae1da8f8206e91b4c7ba585f] <==
	2024/06/04 21:35:47 GCP Auth Webhook started!
	2024/06/04 21:35:48 Ready to marshal response ...
	2024/06/04 21:35:48 Ready to write response ...
	2024/06/04 21:35:48 Ready to marshal response ...
	2024/06/04 21:35:48 Ready to write response ...
	2024/06/04 21:35:48 Ready to marshal response ...
	2024/06/04 21:35:48 Ready to write response ...
	2024/06/04 21:35:52 Ready to marshal response ...
	2024/06/04 21:35:52 Ready to write response ...
	2024/06/04 21:35:58 Ready to marshal response ...
	2024/06/04 21:35:58 Ready to write response ...
	2024/06/04 21:36:03 Ready to marshal response ...
	2024/06/04 21:36:03 Ready to write response ...
	2024/06/04 21:36:03 Ready to marshal response ...
	2024/06/04 21:36:03 Ready to write response ...
	2024/06/04 21:36:06 Ready to marshal response ...
	2024/06/04 21:36:06 Ready to write response ...
	
	
	==> kernel <==
	 21:36:08 up 4 min,  0 users,  load average: 4.52, 1.93, 0.82
	Linux addons-450158 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [41c73eab8ef50ae2113dadb3fafdeb790b711e249d91585574b3e8a3c239e591] <==
	W0604 21:34:28.789305       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:29.881345       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:30.894018       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:31.995334       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:33.005960       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:34.031827       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:35.103903       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:36.131189       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:37.264405       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:38.328213       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:39.425078       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:40.494944       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:41.542275       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:42.589648       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:34:43.663484       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.183.36:443: connect: connection refused
	W0604 21:35:10.289460       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	E0604 21:35:10.289783       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	W0604 21:35:28.338395       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	E0604 21:35:28.338472       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	W0604 21:35:28.403715       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	E0604 21:35:28.403930       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.140.134:443: connect: connection refused
	I0604 21:35:48.314655       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.222.50"}
	E0604 21:36:00.425822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.110:8443->192.168.39.1:33870: use of closed network connection
	I0604 21:36:03.674401       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0604 21:36:03.889265       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.111.5"}
	
	
	==> kube-controller-manager [6f4b9316974b5472150a11a23fbd04a9b695329396ae1ea7cdac630574a1a628] <==
	I0604 21:35:31.282826       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:35:31.402609       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:35:31.404479       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:35:31.412384       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:35:31.418237       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:35:31.431362       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:35:31.446004       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:35:31.454819       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:35:32.291233       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:35:47.362870       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="19.583985ms"
	I0604 21:35:47.364192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="598.215µs"
	I0604 21:35:48.419780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="69.820693ms"
	I0604 21:35:48.438025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="17.940139ms"
	I0604 21:35:48.453845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="15.768477ms"
	I0604 21:35:48.453939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="38.254µs"
	I0604 21:35:57.381262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="45.839µs"
	I0604 21:35:57.415934       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="14.573201ms"
	I0604 21:35:57.416407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="92.086µs"
	I0604 21:36:00.824865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.914µs"
	I0604 21:36:01.020403       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:36:01.025290       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:36:01.063226       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:36:01.065169       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:36:05.118317       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="7.559µs"
	I0604 21:36:06.167325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="4.468µs"
	
	
	==> kube-proxy [1654224f69f200cfebe55ef5f70eda2fca647735121f3a0fa8c4505115ea7ca9] <==
	I0604 21:32:13.915605       1 server_linux.go:69] "Using iptables proxy"
	I0604 21:32:13.965982       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	I0604 21:32:14.148406       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 21:32:14.148436       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 21:32:14.148452       1 server_linux.go:165] "Using iptables Proxier"
	I0604 21:32:14.162820       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 21:32:14.163003       1 server.go:872] "Version info" version="v1.30.1"
	I0604 21:32:14.163016       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 21:32:14.164454       1 config.go:192] "Starting service config controller"
	I0604 21:32:14.164463       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 21:32:14.164628       1 config.go:101] "Starting endpoint slice config controller"
	I0604 21:32:14.164634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 21:32:14.165131       1 config.go:319] "Starting node config controller"
	I0604 21:32:14.165140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 21:32:14.265876       1 shared_informer.go:320] Caches are synced for node config
	I0604 21:32:14.265974       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0604 21:32:14.265973       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [47f37dd6b77627e64e0b60c1e62b92e3313ecdcdf0e1f82e3907a523af9efda3] <==
	W0604 21:31:55.194011       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 21:31:55.194119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 21:31:55.194173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0604 21:31:55.194198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0604 21:31:55.194044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0604 21:31:55.194287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0604 21:31:56.055559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0604 21:31:56.055604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0604 21:31:56.090597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0604 21:31:56.090641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0604 21:31:56.111677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 21:31:56.111721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 21:31:56.184590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0604 21:31:56.184711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0604 21:31:56.201342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0604 21:31:56.201790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0604 21:31:56.275416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0604 21:31:56.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0604 21:31:56.451818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 21:31:56.451869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 21:31:56.452930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0604 21:31:56.452999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0604 21:31:56.535617       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0604 21:31:56.535838       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0604 21:31:58.385194       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 04 21:36:05 addons-450158 kubelet[1231]: I0604 21:36:05.805244    1231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-n8xrz\" (UniqueName: \"kubernetes.io/projected/23b5dc1f-428a-49e7-97d8-69cc8992b438-kube-api-access-n8xrz\") on node \"addons-450158\" DevicePath \"\""
	Jun 04 21:36:05 addons-450158 kubelet[1231]: I0604 21:36:05.807020    1231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095344ac-eb93-473e-992b-e4b40d5343a5-kube-api-access-tpzqv" (OuterVolumeSpecName: "kube-api-access-tpzqv") pod "095344ac-eb93-473e-992b-e4b40d5343a5" (UID: "095344ac-eb93-473e-992b-e4b40d5343a5"). InnerVolumeSpecName "kube-api-access-tpzqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 04 21:36:05 addons-450158 kubelet[1231]: I0604 21:36:05.905926    1231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tpzqv\" (UniqueName: \"kubernetes.io/projected/095344ac-eb93-473e-992b-e4b40d5343a5-kube-api-access-tpzqv\") on node \"addons-450158\" DevicePath \"\""
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.452697    1231 scope.go:117] "RemoveContainer" containerID="001789f9cb17c89170ca349f7304c7ae9901340d0c3d8e685213122306eca809"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.479766    1231 scope.go:117] "RemoveContainer" containerID="6b8c31db4f040087152b5e0cd04a607579844935a86021f4056c0c1f8267e920"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.625312    1231 topology_manager.go:215] "Topology Admit Handler" podUID="494be2ac-658b-4b7a-b0d5-e1d7071d7f8c" podNamespace="default" podName="task-pv-pod"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: E0604 21:36:06.625580    1231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee43675-5282-4a27-ac7b-1e311255f080" containerName="registry-test"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: E0604 21:36:06.625591    1231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="095344ac-eb93-473e-992b-e4b40d5343a5" containerName="registry-proxy"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: E0604 21:36:06.625656    1231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23b5dc1f-428a-49e7-97d8-69cc8992b438" containerName="helm-test"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: E0604 21:36:06.625666    1231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4cc40ea-9c44-4907-96be-00b380775884" containerName="registry"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.625759    1231 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b5dc1f-428a-49e7-97d8-69cc8992b438" containerName="helm-test"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.625786    1231 memory_manager.go:354] "RemoveStaleState removing state" podUID="095344ac-eb93-473e-992b-e4b40d5343a5" containerName="registry-proxy"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.625793    1231 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee43675-5282-4a27-ac7b-1e311255f080" containerName="registry-test"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.625799    1231 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cc40ea-9c44-4907-96be-00b380775884" containerName="registry"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.713736    1231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwqfg\" (UniqueName: \"kubernetes.io/projected/8a627a17-28cd-4241-995a-6f4c6f44f816-kube-api-access-hwqfg\") pod \"8a627a17-28cd-4241-995a-6f4c6f44f816\" (UID: \"8a627a17-28cd-4241-995a-6f4c6f44f816\") "
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.713837    1231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/494be2ac-658b-4b7a-b0d5-e1d7071d7f8c-gcp-creds\") pod \"task-pv-pod\" (UID: \"494be2ac-658b-4b7a-b0d5-e1d7071d7f8c\") " pod="default/task-pv-pod"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.713869    1231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxvb\" (UniqueName: \"kubernetes.io/projected/494be2ac-658b-4b7a-b0d5-e1d7071d7f8c-kube-api-access-lpxvb\") pod \"task-pv-pod\" (UID: \"494be2ac-658b-4b7a-b0d5-e1d7071d7f8c\") " pod="default/task-pv-pod"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.713892    1231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a7385bf9-2a4b-42ad-ba42-6b07c3ec603a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^73252ae0-22ba-11ef-a358-060cabeb07e0\") pod \"task-pv-pod\" (UID: \"494be2ac-658b-4b7a-b0d5-e1d7071d7f8c\") " pod="default/task-pv-pod"
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.716682    1231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a627a17-28cd-4241-995a-6f4c6f44f816-kube-api-access-hwqfg" (OuterVolumeSpecName: "kube-api-access-hwqfg") pod "8a627a17-28cd-4241-995a-6f4c6f44f816" (UID: "8a627a17-28cd-4241-995a-6f4c6f44f816"). InnerVolumeSpecName "kube-api-access-hwqfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.814907    1231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hwqfg\" (UniqueName: \"kubernetes.io/projected/8a627a17-28cd-4241-995a-6f4c6f44f816-kube-api-access-hwqfg\") on node \"addons-450158\" DevicePath \"\""
	Jun 04 21:36:06 addons-450158 kubelet[1231]: I0604 21:36:06.831563    1231 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-a7385bf9-2a4b-42ad-ba42-6b07c3ec603a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^73252ae0-22ba-11ef-a358-060cabeb07e0\") pod \"task-pv-pod\" (UID: \"494be2ac-658b-4b7a-b0d5-e1d7071d7f8c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/3d2e918500f3593c5a197113f22dfeab990ada47192127bdd7ae53f32018330d/globalmount\"" pod="default/task-pv-pod"
	Jun 04 21:36:07 addons-450158 kubelet[1231]: I0604 21:36:07.466705    1231 scope.go:117] "RemoveContainer" containerID="737d91e0b7c78d97e5bd5411afff686bc870d35f328c749b55b36248f4422e26"
	Jun 04 21:36:07 addons-450158 kubelet[1231]: I0604 21:36:07.806973    1231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095344ac-eb93-473e-992b-e4b40d5343a5" path="/var/lib/kubelet/pods/095344ac-eb93-473e-992b-e4b40d5343a5/volumes"
	Jun 04 21:36:07 addons-450158 kubelet[1231]: I0604 21:36:07.808081    1231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a627a17-28cd-4241-995a-6f4c6f44f816" path="/var/lib/kubelet/pods/8a627a17-28cd-4241-995a-6f4c6f44f816/volumes"
	Jun 04 21:36:07 addons-450158 kubelet[1231]: I0604 21:36:07.808576    1231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cc40ea-9c44-4907-96be-00b380775884" path="/var/lib/kubelet/pods/d4cc40ea-9c44-4907-96be-00b380775884/volumes"
	
	
	==> storage-provisioner [675b0f9fd8cc58b11f79a358c9f69eeaaa6b155b0e1301afbc84b49a9e7db4fc] <==
	I0604 21:32:18.561282       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0604 21:32:18.592388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0604 21:32:18.592449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0604 21:32:18.613353       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0604 21:32:18.613788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-450158_d060d893-5fe5-4b0d-9370-7f71de924184!
	I0604 21:32:18.615164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a9a8e12-cba4-414f-9b04-1ff8a9e0f1c4", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-450158_d060d893-5fe5-4b0d-9370-7f71de924184 became leader
	I0604 21:32:18.715347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-450158_d060d893-5fe5-4b0d-9370-7f71de924184!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-450158 -n addons-450158
helpers_test.go:261: (dbg) Run:  kubectl --context addons-450158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-5zgpz ingress-nginx-admission-patch-2ldqt volcano-admission-init-hfn6l
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-450158 describe pod nginx task-pv-pod ingress-nginx-admission-create-5zgpz ingress-nginx-admission-patch-2ldqt volcano-admission-init-hfn6l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-450158 describe pod nginx task-pv-pod ingress-nginx-admission-create-5zgpz ingress-nginx-admission-patch-2ldqt volcano-admission-init-hfn6l: exit status 1 (76.655341ms)
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-450158/192.168.39.110
	Start Time:       Tue, 04 Jun 2024 21:36:03 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkhxn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zkhxn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/nginx to addons-450158
	  Normal  Pulling    5s    kubelet            Pulling image "docker.io/nginx:alpine"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/nginx:alpine" in 4.011s (4.012s including waiting). Image size: 20467161 bytes.
	  Normal  Created    1s    kubelet            Created container nginx
	  Normal  Started    1s    kubelet            Started container nginx
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-450158/192.168.39.110
	Start Time:       Tue, 04 Jun 2024 21:36:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpxvb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-lpxvb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/task-pv-pod to addons-450158
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5zgpz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2ldqt" not found
	Error from server (NotFound): pods "volcano-admission-init-hfn6l" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-450158 describe pod nginx task-pv-pod ingress-nginx-admission-create-5zgpz ingress-nginx-admission-patch-2ldqt volcano-admission-init-hfn6l: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (8.04s)


Test pass (289/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 78.87
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 15.21
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.12
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.54
22 TestOffline 122.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 272.7
29 TestAddons/parallel/Registry 17.85
30 TestAddons/parallel/Ingress 23.05
32 TestAddons/parallel/MetricsServer 6.92
33 TestAddons/parallel/HelmTiller 18.98
35 TestAddons/parallel/CSI 38.23
36 TestAddons/parallel/Headlamp 15.96
37 TestAddons/parallel/CloudSpanner 5.54
38 TestAddons/parallel/LocalPath 62.93
39 TestAddons/parallel/NvidiaDevicePlugin 6.64
40 TestAddons/parallel/Yakd 6.01
41 TestAddons/parallel/Volcano 36.21
44 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestAddons/StoppedEnableDisable 92.66
46 TestCertOptions 50.2
47 TestCertExpiration 304.84
49 TestForceSystemdFlag 102.9
50 TestForceSystemdEnv 43.99
52 TestKVMDriverInstallOrUpdate 12.28
56 TestErrorSpam/setup 43.18
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.69
59 TestErrorSpam/pause 1.46
60 TestErrorSpam/unpause 1.53
61 TestErrorSpam/stop 4.16
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 95.91
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 25.45
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
73 TestFunctional/serial/CacheCmd/cache/add_local 2.89
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.09
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 44.55
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.38
84 TestFunctional/serial/LogsFileCmd 1.36
85 TestFunctional/serial/InvalidService 4.2
87 TestFunctional/parallel/ConfigCmd 0.28
88 TestFunctional/parallel/DashboardCmd 14.62
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 1.05
95 TestFunctional/parallel/ServiceCmdConnect 11.57
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 47.38
99 TestFunctional/parallel/SSHCmd 0.39
100 TestFunctional/parallel/CpCmd 1.16
101 TestFunctional/parallel/MySQL 37.4
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.39
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
111 TestFunctional/parallel/License 0.8
121 TestFunctional/parallel/ServiceCmd/DeployApp 10.23
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
123 TestFunctional/parallel/ProfileCmd/profile_list 0.26
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
125 TestFunctional/parallel/MountCmd/any-port 8.54
126 TestFunctional/parallel/MountCmd/specific-port 1.71
127 TestFunctional/parallel/ServiceCmd/List 0.29
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
130 TestFunctional/parallel/ServiceCmd/Format 0.29
131 TestFunctional/parallel/ServiceCmd/URL 0.35
132 TestFunctional/parallel/MountCmd/VerifyCleanup 0.85
133 TestFunctional/parallel/Version/short 0.04
134 TestFunctional/parallel/Version/components 0.57
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
139 TestFunctional/parallel/ImageCommands/ImageBuild 6.3
140 TestFunctional/parallel/ImageCommands/Setup 3.03
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.51
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.16
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.95
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.04
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.64
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.1
151 TestFunctional/delete_addon-resizer_images 0.06
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 282.11
158 TestMultiControlPlane/serial/DeployApp 6.94
159 TestMultiControlPlane/serial/PingHostFromPods 1.11
160 TestMultiControlPlane/serial/AddWorkerNode 45.09
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.47
164 TestMultiControlPlane/serial/StopSecondaryNode 92.26
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
166 TestMultiControlPlane/serial/RestartSecondaryNode 36.84
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.53
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 483.92
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.62
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
171 TestMultiControlPlane/serial/StopCluster 274.68
172 TestMultiControlPlane/serial/RestartCluster 157.45
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 69.3
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 61.67
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.71
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.53
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 90.9
211 TestMountStart/serial/StartWithMountFirst 28.15
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 33.18
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.66
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 24.04
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 105.27
223 TestMultiNode/serial/DeployApp2Nodes 6.55
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 40.38
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7
229 TestMultiNode/serial/StopNode 2.23
230 TestMultiNode/serial/StartAfterStop 23.55
231 TestMultiNode/serial/RestartKeepsNodes 288.63
232 TestMultiNode/serial/DeleteNode 2.27
233 TestMultiNode/serial/StopMultiNode 183.11
234 TestMultiNode/serial/RestartMultiNode 77.78
235 TestMultiNode/serial/ValidateNameConflict 41.86
240 TestPreload 393.11
242 TestScheduledStopUnix 112.5
246 TestRunningBinaryUpgrade 205.33
248 TestKubernetesUpgrade 162.78
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 91.14
260 TestNetworkPlugins/group/false 2.76
264 TestNoKubernetes/serial/StartWithStopK8s 56.24
265 TestNoKubernetes/serial/Start 52.2
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
267 TestNoKubernetes/serial/ProfileList 1.11
268 TestNoKubernetes/serial/Stop 1.71
269 TestNoKubernetes/serial/StartNoArgs 40.32
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
271 TestStoppedBinaryUpgrade/Setup 3.25
272 TestStoppedBinaryUpgrade/Upgrade 188.25
281 TestPause/serial/Start 140.29
282 TestNetworkPlugins/group/auto/Start 121.65
283 TestNetworkPlugins/group/kindnet/Start 116.53
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
285 TestNetworkPlugins/group/calico/Start 90.02
286 TestNetworkPlugins/group/auto/KubeletFlags 0.21
287 TestNetworkPlugins/group/auto/NetCatPod 10.31
288 TestNetworkPlugins/group/auto/DNS 0.16
289 TestNetworkPlugins/group/auto/Localhost 0.13
290 TestNetworkPlugins/group/auto/HairPin 0.12
291 TestPause/serial/SecondStartNoReconfiguration 39.18
292 TestNetworkPlugins/group/custom-flannel/Start 85.31
293 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
295 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
296 TestNetworkPlugins/group/kindnet/DNS 0.17
297 TestNetworkPlugins/group/kindnet/Localhost 0.13
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestPause/serial/Pause 0.78
300 TestPause/serial/VerifyStatus 0.26
301 TestPause/serial/Unpause 0.74
302 TestPause/serial/PauseAgain 1.02
303 TestPause/serial/DeletePaused 1.08
304 TestPause/serial/VerifyDeletedResources 0.54
305 TestNetworkPlugins/group/enable-default-cni/Start 66.37
306 TestNetworkPlugins/group/flannel/Start 104.55
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.23
309 TestNetworkPlugins/group/calico/NetCatPod 11.24
310 TestNetworkPlugins/group/calico/DNS 0.16
311 TestNetworkPlugins/group/calico/Localhost 0.21
312 TestNetworkPlugins/group/calico/HairPin 0.13
313 TestNetworkPlugins/group/bridge/Start 62.05
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
318 TestNetworkPlugins/group/custom-flannel/DNS 0.17
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
325 TestStartStop/group/old-k8s-version/serial/FirstStart 185.48
327 TestStartStop/group/embed-certs/serial/FirstStart 128.63
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
330 TestNetworkPlugins/group/bridge/NetCatPod 10.22
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
332 TestNetworkPlugins/group/flannel/NetCatPod 9.76
333 TestNetworkPlugins/group/bridge/DNS 32.84
334 TestNetworkPlugins/group/flannel/DNS 0.21
335 TestNetworkPlugins/group/flannel/Localhost 0.17
336 TestNetworkPlugins/group/flannel/HairPin 0.14
338 TestStartStop/group/no-preload/serial/FirstStart 113.14
339 TestNetworkPlugins/group/bridge/Localhost 0.16
340 TestNetworkPlugins/group/bridge/HairPin 0.14
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.69
343 TestStartStop/group/embed-certs/serial/DeployApp 11.33
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
345 TestStartStop/group/embed-certs/serial/Stop 91.75
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.67
349 TestStartStop/group/no-preload/serial/DeployApp 11.28
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
351 TestStartStop/group/no-preload/serial/Stop 91.67
352 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
354 TestStartStop/group/old-k8s-version/serial/Stop 91.82
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/embed-certs/serial/SecondStart 321.49
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 319.33
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
360 TestStartStop/group/no-preload/serial/SecondStart 322.92
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
362 TestStartStop/group/old-k8s-version/serial/SecondStart 518.22
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
366 TestStartStop/group/embed-certs/serial/Pause 2.99
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.01
369 TestStartStop/group/newest-cni/serial/FirstStart 61
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
376 TestStartStop/group/no-preload/serial/Pause 3.41
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
379 TestStartStop/group/newest-cni/serial/Stop 6.64
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
381 TestStartStop/group/newest-cni/serial/SecondStart 32.28
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
385 TestStartStop/group/newest-cni/serial/Pause 2.38
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/old-k8s-version/serial/Pause 2.31
TestDownloadOnly/v1.20.0/json-events (78.87s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-543481 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-543481 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m18.873748256s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (78.87s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-543481
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-543481: exit status 85 (59.735787ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |          |
	|         | -p download-only-543481        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:29:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:29:39.286610   13307 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:29:39.286853   13307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:29:39.286863   13307 out.go:304] Setting ErrFile to fd 2...
	I0604 21:29:39.286867   13307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:29:39.287089   13307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	W0604 21:29:39.287267   13307 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19024-5817/.minikube/config/config.json: open /home/jenkins/minikube-integration/19024-5817/.minikube/config/config.json: no such file or directory
	I0604 21:29:39.287850   13307 out.go:298] Setting JSON to true
	I0604 21:29:39.288718   13307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":720,"bootTime":1717535859,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 21:29:39.288769   13307 start.go:139] virtualization: kvm guest
	I0604 21:29:39.291133   13307 out.go:97] [download-only-543481] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0604 21:29:39.292611   13307 out.go:169] MINIKUBE_LOCATION=19024
	W0604 21:29:39.291216   13307 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball: no such file or directory
	I0604 21:29:39.291245   13307 notify.go:220] Checking for updates...
	I0604 21:29:39.295251   13307 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:29:39.296815   13307 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:29:39.298228   13307 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:29:39.299571   13307 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0604 21:29:39.302039   13307 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 21:29:39.302316   13307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:29:39.401219   13307 out.go:97] Using the kvm2 driver based on user configuration
	I0604 21:29:39.401244   13307 start.go:297] selected driver: kvm2
	I0604 21:29:39.401250   13307 start.go:901] validating driver "kvm2" against <nil>
	I0604 21:29:39.401548   13307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:29:39.401656   13307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19024-5817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0604 21:29:39.415870   13307 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0604 21:29:39.415916   13307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:29:39.416375   13307 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0604 21:29:39.416547   13307 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 21:29:39.416607   13307 cni.go:84] Creating CNI manager for ""
	I0604 21:29:39.416620   13307 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0604 21:29:39.416630   13307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 21:29:39.416695   13307 start.go:340] cluster config:
	{Name:download-only-543481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-543481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:29:39.416864   13307 iso.go:125] acquiring lock: {Name:mkda4cefdbcc254212dd1652a198fa2930d04a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:29:39.418600   13307 out.go:97] Downloading VM boot image ...
	I0604 21:29:39.418643   13307 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19024-5817/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso
	I0604 21:29:53.098977   13307 out.go:97] Starting "download-only-543481" primary control-plane node in "download-only-543481" cluster
	I0604 21:29:53.099005   13307 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0604 21:29:53.252125   13307 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0604 21:29:53.252154   13307 cache.go:56] Caching tarball of preloaded images
	I0604 21:29:53.252316   13307 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0604 21:29:53.254358   13307 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0604 21:29:53.254375   13307 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:29:53.406587   13307 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0604 21:30:17.436342   13307 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:30:17.436465   13307 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:30:18.330896   13307 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0604 21:30:18.331360   13307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/download-only-543481/config.json ...
	I0604 21:30:18.331410   13307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/download-only-543481/config.json: {Name:mka9d54369ea95cb1062ab6700f740fa31c9389b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:30:18.331603   13307 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0604 21:30:18.331829   13307 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19024-5817/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-543481 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543481"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-543481
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.1/json-events (15.21s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-645945 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-645945 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (15.207213742s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (15.21s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-645945
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-645945: exit status 85 (55.335987ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |                     |
	|         | -p download-only-543481        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-543481        | download-only-543481 | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| start   | -o=json --download-only        | download-only-645945 | jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | -p download-only-645945        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:30:58
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:30:58.466754   13728 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:30:58.466999   13728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:58.467010   13728 out.go:304] Setting ErrFile to fd 2...
	I0604 21:30:58.467015   13728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:58.467186   13728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 21:30:58.467707   13728 out.go:298] Setting JSON to true
	I0604 21:30:58.468549   13728 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":799,"bootTime":1717535859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 21:30:58.468604   13728 start.go:139] virtualization: kvm guest
	I0604 21:30:58.470668   13728 out.go:97] [download-only-645945] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0604 21:30:58.472264   13728 out.go:169] MINIKUBE_LOCATION=19024
	I0604 21:30:58.470802   13728 notify.go:220] Checking for updates...
	I0604 21:30:58.475004   13728 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:30:58.476489   13728 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:30:58.477888   13728 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:30:58.479525   13728 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0604 21:30:58.482032   13728 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 21:30:58.482256   13728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:30:58.513298   13728 out.go:97] Using the kvm2 driver based on user configuration
	I0604 21:30:58.513364   13728 start.go:297] selected driver: kvm2
	I0604 21:30:58.513372   13728 start.go:901] validating driver "kvm2" against <nil>
	I0604 21:30:58.513665   13728 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:30:58.513787   13728 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19024-5817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0604 21:30:58.528031   13728 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0604 21:30:58.528070   13728 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:30:58.528550   13728 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0604 21:30:58.528706   13728 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 21:30:58.528758   13728 cni.go:84] Creating CNI manager for ""
	I0604 21:30:58.528769   13728 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0604 21:30:58.528781   13728 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 21:30:58.528829   13728 start.go:340] cluster config:
	{Name:download-only-645945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-645945 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:30:58.528937   13728 iso.go:125] acquiring lock: {Name:mkda4cefdbcc254212dd1652a198fa2930d04a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:30:58.530646   13728 out.go:97] Starting "download-only-645945" primary control-plane node in "download-only-645945" cluster
	I0604 21:30:58.530665   13728 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0604 21:30:58.678795   13728 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4
	I0604 21:30:58.678821   13728 cache.go:56] Caching tarball of preloaded images
	I0604 21:30:58.678980   13728 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0604 21:30:58.680730   13728 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0604 21:30:58.680747   13728 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:30:58.834446   13728 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:f7f0b156c28affaf1ce61807d10333ee -> /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4
	I0604 21:31:11.697262   13728 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:31:11.697350   13728 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19024-5817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-amd64.tar.lz4 ...
	I0604 21:31:12.436538   13728 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on containerd
	I0604 21:31:12.436887   13728 profile.go:143] Saving config to /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/download-only-645945/config.json ...
	I0604 21:31:12.436915   13728 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/download-only-645945/config.json: {Name:mk8a312e39da91190e75c29561e7f48f840fcf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:31:12.437086   13728 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0604 21:31:12.437234   13728 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19024-5817/.minikube/cache/linux/amd64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-645945 host does not exist
	  To start a cluster, run: "minikube start -p download-only-645945"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-645945
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-460204 --alsologtostderr --binary-mirror http://127.0.0.1:40673 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-460204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-460204
--- PASS: TestBinaryMirror (0.54s)

TestOffline (122.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-898790 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-898790 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m1.933997332s)
helpers_test.go:175: Cleaning up "offline-containerd-898790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-898790
--- PASS: TestOffline (122.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-450158
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-450158: exit status 85 (47.560994ms)

-- stdout --
	* Profile "addons-450158" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-450158"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-450158
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-450158: exit status 85 (51.349195ms)

-- stdout --
	* Profile "addons-450158" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-450158"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (272.7s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-450158 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-450158 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m32.700425735s)
--- PASS: TestAddons/Setup (272.70s)

TestAddons/parallel/Registry (17.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 28.985845ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jv5wd" [d4cc40ea-9c44-4907-96be-00b380775884] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015824046s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8w8gc" [095344ac-eb93-473e-992b-e4b40d5343a5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004294855s
addons_test.go:342: (dbg) Run:  kubectl --context addons-450158 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-450158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-450158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.041324071s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 ip
2024/06/04 21:36:04 [DEBUG] GET http://192.168.39.110:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.85s)

TestAddons/parallel/Ingress (23.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-450158 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-450158 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-450158 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae699f7f-9e2d-47c8-b298-7bab36294d94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae699f7f-9e2d-47c8-b298-7bab36294d94] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.006217484s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-450158 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.110
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 addons disable ingress-dns --alsologtostderr -v=1: (1.983080664s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 addons disable ingress --alsologtostderr -v=1: (7.835405559s)
--- PASS: TestAddons/parallel/Ingress (23.05s)
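Note: the Ingress test above routes a curl with `Host: nginx.example.com` through the ingress-nginx controller to the nginx pod. The contents of `testdata/nginx-ingress-v1.yaml` are not reproduced in the log; a minimal sketch of an Ingress of that shape (resource and service names here are illustrative assumptions, not the actual testdata) would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress            # illustrative name
spec:
  rules:
    - host: nginx.example.com    # matches the Host header the test curls
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx      # assumed service from nginx-pod-svc.yaml
                port:
                  number: 80
```

The test then verifies routing with `ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"` from inside the minikube VM.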

TestAddons/parallel/MetricsServer (6.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.161497ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-7kq77" [8317cea0-7258-483a-be4d-2432c8e2cb4e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.07470832s
addons_test.go:417: (dbg) Run:  kubectl --context addons-450158 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.92s)

TestAddons/parallel/HelmTiller (18.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 29.127665ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-8wdfb" [8a627a17-28cd-4241-995a-6f4c6f44f816] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013185839s
addons_test.go:475: (dbg) Run:  kubectl --context addons-450158 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-450158 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.086138998s)
addons_test.go:480: kubectl --context addons-450158 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-450158 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-450158 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.498973577s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.98s)

TestAddons/parallel/CSI (38.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.802589ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-450158 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-450158 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [494be2ac-658b-4b7a-b0d5-e1d7071d7f8c] Pending
helpers_test.go:344: "task-pv-pod" [494be2ac-658b-4b7a-b0d5-e1d7071d7f8c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [494be2ac-658b-4b7a-b0d5-e1d7071d7f8c] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004218007s
addons_test.go:586: (dbg) Run:  kubectl --context addons-450158 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-450158 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-450158 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-450158 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-450158 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-450158 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-450158 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bcf76b79-aa65-4e5b-9e4d-0e3a7ff604ae] Pending
helpers_test.go:344: "task-pv-pod-restore" [bcf76b79-aa65-4e5b-9e4d-0e3a7ff604ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bcf76b79-aa65-4e5b-9e4d-0e3a7ff604ae] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.038785657s
addons_test.go:628: (dbg) Run:  kubectl --context addons-450158 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-450158 delete pod task-pv-pod-restore: (1.64443542s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-450158 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-450158 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.6520244s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.23s)
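Note: the CSI test above exercises the full snapshot/restore cycle: create PVC `hpvc`, snapshot it as `new-snapshot-demo`, then restore into `hpvc-restore` via a `dataSource` reference. The testdata manifests are not shown in the log; a sketch of the snapshot and restore resources (the snapshot/storage class names and size are assumptions, not the actual testdata) would be:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # assumed class name
  source:
    persistentVolumeClaimName: hpvc                # the PVC created earlier
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                # assumed class name
  dataSource:                                      # restore from the snapshot
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```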

TestAddons/parallel/Headlamp (15.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-450158 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-rtj6l" [96b97db5-b6e4-4ad5-b158-c20a1ccea2fb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-rtj6l" [96b97db5-b6e4-4ad5-b158-c20a1ccea2fb] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-rtj6l" [96b97db5-b6e4-4ad5-b158-c20a1ccea2fb] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003531735s
--- PASS: TestAddons/parallel/Headlamp (15.96s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-79sdj" [b6e9ad73-338e-4dca-be68-e0edf21c5552] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003477311s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-450158
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (62.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-450158 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-450158 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7b885de7-49a8-4634-bf1f-66f1edb98ec2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7b885de7-49a8-4634-bf1f-66f1edb98ec2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7b885de7-49a8-4634-bf1f-66f1edb98ec2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004225225s
addons_test.go:992: (dbg) Run:  kubectl --context addons-450158 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 ssh "cat /opt/local-path-provisioner/pvc-1fdb6661-4e03-4271-9ce1-fddb0a943cc8_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-450158 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-450158 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.146764274s)
--- PASS: TestAddons/parallel/LocalPath (62.93s)
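Note: the LocalPath test binds `test-pvc` through the Rancher local-path provisioner, whose volumes live under `/opt/local-path-provisioner` on the node (as the `ssh "cat ..."` step above shows). A sketch of such a PVC (`local-path` is the provisioner's default storage class; the requested size is an assumption, not the actual `testdata/storage-provisioner-rancher/pvc.yaml`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # default class installed by the provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
```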

TestAddons/parallel/NvidiaDevicePlugin (6.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-974wq" [98dfc179-56fb-4106-861e-9efeae4bfef4] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005574595s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-450158
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-scdnc" [94018798-7d90-4e36-85b7-7b3eb00a3ed8] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004648201s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/parallel/Volcano (36.21s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 11.020038ms
addons_test.go:897: volcano-admission stabilized in 15.348926ms
addons_test.go:889: volcano-scheduler stabilized in 15.627684ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-2mtmf" [bbd3db3c-5bd6-4f45-9b06-067ce3603741] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.003791313s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-sfsq5" [d8c194f2-0c3d-44cc-a9fa-8d089ae5cefb] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.005564366s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-99dhh" [bff8f948-d6c7-4f2a-ae91-0cd3b1259505] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004822615s
addons_test.go:924: (dbg) Run:  kubectl --context addons-450158 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-450158 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-450158 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b9a08fb9-1de8-47c3-96ff-ee2277d04d7f] Pending
helpers_test.go:344: "test-job-nginx-0" [b9a08fb9-1de8-47c3-96ff-ee2277d04d7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b9a08fb9-1de8-47c3-96ff-ee2277d04d7f] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 10.004340133s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-450158 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-450158 addons disable volcano --alsologtostderr -v=1: (9.803664625s)
--- PASS: TestAddons/parallel/Volcano (36.21s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-450158 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-450158 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-450158
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-450158: (1m32.408644586s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-450158
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-450158
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-450158
--- PASS: TestAddons/StoppedEnableDisable (92.66s)

TestCertOptions (50.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-398620 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-398620 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (48.576536991s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-398620 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398620 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-398620 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-398620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-398620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-398620: (1.122685226s)
--- PASS: TestCertOptions (50.20s)

TestCertExpiration (304.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-440633 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-440633 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m48.969447772s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-440633 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-440633 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (15.040617861s)
helpers_test.go:175: Cleaning up "cert-expiration-440633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-440633
--- PASS: TestCertExpiration (304.84s)

TestForceSystemdFlag (102.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-608094 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-608094 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m41.688567102s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-608094 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-608094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-608094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-608094: (1.001398563s)
--- PASS: TestForceSystemdFlag (102.90s)

TestForceSystemdEnv (43.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-948969 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-948969 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (43.03702699s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-948969 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-948969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-948969
--- PASS: TestForceSystemdEnv (43.99s)

TestKVMDriverInstallOrUpdate (12.28s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (12.28s)

TestErrorSpam/setup (43.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-158833 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158833 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-158833 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158833 --driver=kvm2  --container-runtime=containerd: (43.178280164s)
--- PASS: TestErrorSpam/setup (43.18s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (4.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop: (1.419029083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop: (1.280556633s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-158833 --log_dir /tmp/nospam-158833 stop: (1.459221118s)
--- PASS: TestErrorSpam/stop (4.16s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19024-5817/.minikube/files/etc/test/nested/copy/13295/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (95.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0604 21:40:47.452684   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.458617   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.468865   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.489118   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.529361   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.609674   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:47.770130   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:48.090663   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:48.731793   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:50.012028   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:52.573378   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:40:57.694063   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:41:07.934295   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:41:28.414946   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-882970 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m35.909686863s)
--- PASS: TestFunctional/serial/StartWithProxy (95.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-882970 --alsologtostderr -v=8: (25.444271345s)
functional_test.go:659: soft start took 25.444880105s for "functional-882970" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.45s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-882970 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:3.1: (1.134876016s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:3.3: (1.152484749s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 cache add registry.k8s.io/pause:latest: (1.159620779s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-882970 /tmp/TestFunctionalserialCacheCmdcacheadd_local3268527628/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache add minikube-local-cache-test:functional-882970
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 cache add minikube-local-cache-test:functional-882970: (2.558299163s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache delete minikube-local-cache-test:functional-882970
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-882970
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.89s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (195.422722ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 kubectl -- --context functional-882970 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-882970 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (44.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0604 21:42:09.375858   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-882970 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.550041006s)
functional_test.go:757: restart took 44.55015174s for "functional-882970" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.55s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-882970 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 logs: (1.379198374s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 logs --file /tmp/TestFunctionalserialLogsFileCmd347575108/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 logs --file /tmp/TestFunctionalserialLogsFileCmd347575108/001/logs.txt: (1.356151891s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-882970 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-882970
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-882970: exit status 115 (259.331431ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.126:32007 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-882970 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 config get cpus: exit status 14 (46.244474ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 config get cpus: exit status 14 (42.505935ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (14.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882970 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882970 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21736: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.62s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (145.165345ms)

-- stdout --
	* [functional-882970] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0604 21:43:09.367841   21281 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:43:09.368132   21281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:43:09.368144   21281 out.go:304] Setting ErrFile to fd 2...
	I0604 21:43:09.368150   21281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:43:09.368459   21281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 21:43:09.369102   21281 out.go:298] Setting JSON to false
	I0604 21:43:09.370421   21281 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1530,"bootTime":1717535859,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 21:43:09.370502   21281 start.go:139] virtualization: kvm guest
	I0604 21:43:09.372677   21281 out.go:177] * [functional-882970] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0604 21:43:09.374209   21281 notify.go:220] Checking for updates...
	I0604 21:43:09.374228   21281 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 21:43:09.375918   21281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:43:09.377317   21281 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:43:09.378921   21281 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:43:09.380692   21281 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0604 21:43:09.382306   21281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 21:43:09.384533   21281 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:43:09.385230   21281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:43:09.385331   21281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:43:09.400230   21281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I0604 21:43:09.400808   21281 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:43:09.401497   21281 main.go:141] libmachine: Using API Version  1
	I0604 21:43:09.401525   21281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:43:09.401922   21281 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:43:09.402141   21281 main.go:141] libmachine: (functional-882970) Calling .DriverName
	I0604 21:43:09.402414   21281 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:43:09.402835   21281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:43:09.402904   21281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:43:09.418742   21281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0604 21:43:09.419143   21281 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:43:09.419506   21281 main.go:141] libmachine: Using API Version  1
	I0604 21:43:09.419526   21281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:43:09.419864   21281 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:43:09.420004   21281 main.go:141] libmachine: (functional-882970) Calling .DriverName
	I0604 21:43:09.453783   21281 out.go:177] * Using the kvm2 driver based on existing profile
	I0604 21:43:09.455245   21281 start.go:297] selected driver: kvm2
	I0604 21:43:09.455270   21281 start.go:901] validating driver "kvm2" against &{Name:functional-882970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-882970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:43:09.455381   21281 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 21:43:09.457471   21281 out.go:177] 
	W0604 21:43:09.458879   21281 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0604 21:43:09.460225   21281 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (131.894104ms)

-- stdout --
	* [functional-882970] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0604 21:43:09.225124   21243 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:43:09.225224   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:43:09.225232   21243 out.go:304] Setting ErrFile to fd 2...
	I0604 21:43:09.225236   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:43:09.225495   21243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 21:43:09.225985   21243 out.go:298] Setting JSON to false
	I0604 21:43:09.226861   21243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1530,"bootTime":1717535859,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 21:43:09.226913   21243 start.go:139] virtualization: kvm guest
	I0604 21:43:09.229129   21243 out.go:177] * [functional-882970] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0604 21:43:09.231084   21243 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 21:43:09.231100   21243 notify.go:220] Checking for updates...
	I0604 21:43:09.232492   21243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:43:09.233874   21243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 21:43:09.235172   21243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 21:43:09.236617   21243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0604 21:43:09.237868   21243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 21:43:09.239436   21243 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:43:09.239801   21243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:43:09.239845   21243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:43:09.254804   21243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0604 21:43:09.255216   21243 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:43:09.255811   21243 main.go:141] libmachine: Using API Version  1
	I0604 21:43:09.255845   21243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:43:09.256249   21243 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:43:09.256475   21243 main.go:141] libmachine: (functional-882970) Calling .DriverName
	I0604 21:43:09.256869   21243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:43:09.257319   21243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:43:09.257376   21243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:43:09.272311   21243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0604 21:43:09.272733   21243 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:43:09.273216   21243 main.go:141] libmachine: Using API Version  1
	I0604 21:43:09.273246   21243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:43:09.273605   21243 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:43:09.273799   21243 main.go:141] libmachine: (functional-882970) Calling .DriverName
	I0604 21:43:09.308050   21243 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0604 21:43:09.309411   21243 start.go:297] selected driver: kvm2
	I0604 21:43:09.309424   21243 start.go:901] validating driver "kvm2" against &{Name:functional-882970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-882970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:43:09.309537   21243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 21:43:09.311637   21243 out.go:177] 
	W0604 21:43:09.313154   21243 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0604 21:43:09.314666   21243 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (11.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-882970 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-882970 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-l5tw4" [f7cf282e-f7c8-4224-bee3-1ceb1d88f92c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-l5tw4" [f7cf282e-f7c8-4224-bee3-1ceb1d88f92c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003869402s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.126:30715
functional_test.go:1671: http://192.168.39.126:30715: success! body:

Hostname: hello-node-connect-57b4589c47-l5tw4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.126:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.126:30715
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.57s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (47.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1569d1fe-77a5-435e-a559-e159fa1a0cdd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004027823s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-882970 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-882970 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-882970 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882970 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [021d2c5b-2093-498b-a685-e3560c9f9415] Pending
helpers_test.go:344: "sp-pod" [021d2c5b-2093-498b-a685-e3560c9f9415] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [021d2c5b-2093-498b-a685-e3560c9f9415] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004816087s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-882970 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-882970 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-882970 delete -f testdata/storage-provisioner/pod.yaml: (1.631559925s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882970 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [53a8a42c-6cb9-4a58-bc28-8c7104728cd3] Pending
helpers_test.go:344: "sp-pod" [53a8a42c-6cb9-4a58-bc28-8c7104728cd3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [53a8a42c-6cb9-4a58-bc28-8c7104728cd3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.013649983s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-882970 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.38s)

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (1.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh -n functional-882970 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cp functional-882970:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3786752472/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh -n functional-882970 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh -n functional-882970 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.16s)

TestFunctional/parallel/MySQL (37.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-882970 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-cfflc" [3df887d2-a260-48b1-8848-c257421962c8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-cfflc" [3df887d2-a260-48b1-8848-c257421962c8] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.005380099s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;": exit status 1 (168.585209ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;": exit status 1 (141.227042ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;": exit status 1 (129.760874ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882970 exec mysql-64454c8b5c-cfflc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.40s)
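The `ERROR 1045` / `ERROR 2002` failures above are expected noise: the pod reports Running before mysqld has finished initializing, so the test simply re-runs the query until it succeeds. A minimal sketch of that retry pattern in shell — `probe` here is a hypothetical stand-in for the real `kubectl exec ... mysql -ppassword -e "show databases;"` invocation:

```shell
# retry MAX CMD...: re-run CMD until it succeeds, giving up after MAX attempts.
retry() {
  max=$1; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  echo "succeeded on attempt $attempt"
}
```

With the real command in place of `probe`, each failed attempt corresponds to one of the `Non-zero exit` lines logged above.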

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13295/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /etc/test/nested/copy/13295/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13295.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /etc/ssl/certs/13295.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13295.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /usr/share/ca-certificates/13295.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/132952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /etc/ssl/certs/132952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/132952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /usr/share/ca-certificates/132952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-882970 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh "sudo systemctl is-active docker": exit status 1 (240.956759ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh "sudo systemctl is-active crio": exit status 1 (221.672261ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
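The check above relies on `systemctl is-active` exiting non-zero (status 3) and printing `inactive` when a runtime is disabled; `minikube ssh` then surfaces that as exit status 1. A minimal Python sketch of the same interpretation, exercised against a plain shell command rather than the VM (the helper name is illustrative, not from the suite):

```python
import subprocess

def runtime_disabled(check_cmd):
    """Return True when an `is-active`-style probe reports an inactive unit.

    `systemctl is-active` exits 0 for an active unit and non-zero
    (typically 3) otherwise, so a failing exit code together with
    'inactive' on stdout is the expected result for a disabled runtime.
    """
    result = subprocess.run(check_cmd, capture_output=True, text=True)
    return result.returncode != 0 and result.stdout.strip() == "inactive"

# Stand-in for `minikube ssh "sudo systemctl is-active docker"` above.
probe = ["sh", "-c", "echo inactive; exit 3"]
print(runtime_disabled(probe))
```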

                                                
                                    
TestFunctional/parallel/License (0.8s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-882970 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-882970 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-zsgpz" [7295012e-96c4-4e43-b677-465e445f75da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-zsgpz" [7295012e-96c4-4e43-b677-465e445f75da] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.002868133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)
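The "waiting 10m0s for pods matching ... healthy within 10.002868133s" lines come from a poll loop over matching pods. A generic version of that loop, sketched in Python with illustrative names (the real implementation lives in the Go test helpers):

```python
import time

def wait_for(condition, timeout, interval=0.1):
    """Poll condition() until it returns True; raise TimeoutError otherwise.

    Returns the elapsed seconds on success, mirroring the
    "healthy within N" log lines above.
    """
    start = time.monotonic()
    while True:
        if condition():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError("condition not met within %ss" % timeout)
        time.sleep(interval)

# Example: a pod that reaches Running on the third poll.
phases = iter(["Pending", "Pending", "Running"])
elapsed = wait_for(lambda: next(phases, "Running") == "Running",
                   timeout=5.0, interval=0.01)
print("healthy within %.2fs" % elapsed)
```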

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "221.21835ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "43.440875ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "230.779149ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "43.996204ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdany-port1531585212/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1717537378714958191" to /tmp/TestFunctionalparallelMountCmdany-port1531585212/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1717537378714958191" to /tmp/TestFunctionalparallelMountCmdany-port1531585212/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1717537378714958191" to /tmp/TestFunctionalparallelMountCmdany-port1531585212/001/test-1717537378714958191
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.008813ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  4 21:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  4 21:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  4 21:42 test-1717537378714958191
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh cat /mount-9p/test-1717537378714958191
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-882970 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [49248208-cdbc-4aea-b627-7988bde2a0ec] Pending
helpers_test.go:344: "busybox-mount" [49248208-cdbc-4aea-b627-7988bde2a0ec] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [49248208-cdbc-4aea-b627-7988bde2a0ec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [49248208-cdbc-4aea-b627-7988bde2a0ec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005130201s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-882970 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdany-port1531585212/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdspecific-port3521112444/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (184.211271ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdspecific-port3521112444/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh "sudo umount -f /mount-9p": exit status 1 (240.266739ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-882970 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdspecific-port3521112444/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)
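Note the teardown pattern above: `sudo umount -f /mount-9p` exits 32 with "not mounted" once the mount is already gone, and the test treats that as acceptable during cleanup. That decision can be captured as pure logic (a hypothetical helper, using the exit codes seen in the log; util-linux `umount` uses exit status 32 for mount failure):

```python
def unmount_ok(returncode, stdout):
    """Accept a clean unmount, or an unmount that found nothing mounted.

    util-linux `umount` signals "mount failure" with exit status 32;
    when the accompanying message is "not mounted", cleanup can
    safely continue, as the test above does.
    """
    if returncode == 0:
        return True
    return returncode == 32 and "not mounted" in stdout

print(unmount_ok(32, "umount: /mount-9p: not mounted."))
```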

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service list -o json
functional_test.go:1490: Took "259.599744ms" to run "out/minikube-linux-amd64 -p functional-882970 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.126:32571
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.126:32571
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-882970 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4217948525/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882970 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-882970
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-882970
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882970 image ls --format short --alsologtostderr:
I0604 21:43:33.452202   22553 out.go:291] Setting OutFile to fd 1 ...
I0604 21:43:33.452447   22553 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.452459   22553 out.go:304] Setting ErrFile to fd 2...
I0604 21:43:33.452465   22553 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.452690   22553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
I0604 21:43:33.453316   22553 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.453429   22553 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.453796   22553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.453846   22553 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.468494   22553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45987
I0604 21:43:33.468949   22553 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.469545   22553 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.469587   22553 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.469941   22553 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.470146   22553 main.go:141] libmachine: (functional-882970) Calling .GetState
I0604 21:43:33.472112   22553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.472162   22553 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.487865   22553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
I0604 21:43:33.488251   22553 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.488741   22553 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.488764   22553 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.489024   22553 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.489181   22553 main.go:141] libmachine: (functional-882970) Calling .DriverName
I0604 21:43:33.489351   22553 ssh_runner.go:195] Run: systemctl --version
I0604 21:43:33.489378   22553 main.go:141] libmachine: (functional-882970) Calling .GetSSHHostname
I0604 21:43:33.492580   22553 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.492929   22553 main.go:141] libmachine: (functional-882970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:32:48", ip: ""} in network mk-functional-882970: {Iface:virbr1 ExpiryTime:2024-06-04 22:40:09 +0000 UTC Type:0 Mac:52:54:00:94:32:48 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-882970 Clientid:01:52:54:00:94:32:48}
I0604 21:43:33.492960   22553 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined IP address 192.168.39.126 and MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.493287   22553 main.go:141] libmachine: (functional-882970) Calling .GetSSHPort
I0604 21:43:33.493425   22553 main.go:141] libmachine: (functional-882970) Calling .GetSSHKeyPath
I0604 21:43:33.493580   22553 main.go:141] libmachine: (functional-882970) Calling .GetSSHUsername
I0604 21:43:33.493717   22553 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/functional-882970/id_rsa Username:docker}
I0604 21:43:33.587049   22553 ssh_runner.go:195] Run: sudo crictl images --output json
I0604 21:43:33.676411   22553 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.676428   22553 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.676722   22553 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.676738   22553 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:33.676746   22553 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.676771   22553 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.676991   22553 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.677005   22553 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
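The short listing is plain text, one image reference per line. For quick sanity checks it can be grouped by registry in a few lines; a sketch using entries from the listing above (the grouping is an illustration, not part of the test's validation):

```python
from collections import defaultdict

def group_by_registry(listing):
    """Group `image ls --format short` lines by their registry host."""
    groups = defaultdict(list)
    for line in listing.strip().splitlines():
        # Everything before the first "/" is the registry host.
        registry, _, image = line.partition("/")
        groups[registry].append(image)
    return dict(groups)

sample = """\
registry.k8s.io/pause:latest
registry.k8s.io/etcd:3.5.12-0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
"""
for registry, images in group_by_registry(sample).items():
    print(registry, len(images))
```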

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882970 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.1            | sha256:a52dc9 | 19.3MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/library/nginx                     | latest             | sha256:4f67c8 | 71MB   |
| registry.k8s.io/kube-apiserver              | v1.30.1            | sha256:91be94 | 32.8MB |
| registry.k8s.io/kube-proxy                  | v1.30.1            | sha256:747097 | 29MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/minikube-local-cache-test | functional-882970  | sha256:113877 | 992B   |
| gcr.io/google-containers/addon-resizer      | functional-882970  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.1            | sha256:25a138 | 31.1MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882970 image ls --format table --alsologtostderr:
I0604 21:43:33.731626   22600 out.go:291] Setting OutFile to fd 1 ...
I0604 21:43:33.731714   22600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.731725   22600 out.go:304] Setting ErrFile to fd 2...
I0604 21:43:33.731729   22600 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.732430   22600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
I0604 21:43:33.733690   22600 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.733824   22600 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.734179   22600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.734224   22600 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.748615   22600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
I0604 21:43:33.749093   22600 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.749587   22600 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.749611   22600 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.750077   22600 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.750284   22600 main.go:141] libmachine: (functional-882970) Calling .GetState
I0604 21:43:33.751909   22600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.751955   22600 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.765594   22600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
I0604 21:43:33.766007   22600 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.766452   22600 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.766477   22600 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.766783   22600 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.766945   22600 main.go:141] libmachine: (functional-882970) Calling .DriverName
I0604 21:43:33.767156   22600 ssh_runner.go:195] Run: systemctl --version
I0604 21:43:33.767309   22600 main.go:141] libmachine: (functional-882970) Calling .GetSSHHostname
I0604 21:43:33.770285   22600 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.770613   22600 main.go:141] libmachine: (functional-882970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:32:48", ip: ""} in network mk-functional-882970: {Iface:virbr1 ExpiryTime:2024-06-04 22:40:09 +0000 UTC Type:0 Mac:52:54:00:94:32:48 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-882970 Clientid:01:52:54:00:94:32:48}
I0604 21:43:33.770644   22600 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined IP address 192.168.39.126 and MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.770761   22600 main.go:141] libmachine: (functional-882970) Calling .GetSSHPort
I0604 21:43:33.770902   22600 main.go:141] libmachine: (functional-882970) Calling .GetSSHKeyPath
I0604 21:43:33.771097   22600 main.go:141] libmachine: (functional-882970) Calling .GetSSHUsername
I0604 21:43:33.771237   22600 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/functional-882970/id_rsa Username:docker}
I0604 21:43:33.859420   22600 ssh_runner.go:195] Run: sudo crictl images --output json
I0604 21:43:33.908972   22600 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.908990   22600 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.909279   22600 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.909300   22600 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:33.909307   22600 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.909313   22600 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.909362   22600 main.go:141] libmachine: (functional-882970) DBG | Closing plugin on server side
I0604 21:43:33.909557   22600 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.909567   22600 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882970 image ls --format json --alsologtostderr:
[{"id":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"19314468"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-882970"],"size":"10823156"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"32766679"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d"],"repoTags":["docker.io/library/nginx:latest"],"size":"71004355"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"31136343"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:1138773ffdb526cc77476392d979090fbe619278b04d3604343743d6c2deaf13","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-882970"],"size":"992"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"29020445"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882970 image ls --format json --alsologtostderr:
I0604 21:43:33.729841   22599 out.go:291] Setting OutFile to fd 1 ...
I0604 21:43:33.729958   22599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.729969   22599 out.go:304] Setting ErrFile to fd 2...
I0604 21:43:33.729975   22599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.730164   22599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
I0604 21:43:33.730729   22599 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.730843   22599 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.731216   22599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.731279   22599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.746099   22599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37419
I0604 21:43:33.746759   22599 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.747366   22599 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.747394   22599 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.747725   22599 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.747904   22599 main.go:141] libmachine: (functional-882970) Calling .GetState
I0604 21:43:33.750043   22599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.750103   22599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.764713   22599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
I0604 21:43:33.765131   22599 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.765629   22599 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.765649   22599 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.766082   22599 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.766356   22599 main.go:141] libmachine: (functional-882970) Calling .DriverName
I0604 21:43:33.766550   22599 ssh_runner.go:195] Run: systemctl --version
I0604 21:43:33.766584   22599 main.go:141] libmachine: (functional-882970) Calling .GetSSHHostname
I0604 21:43:33.769799   22599 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.770067   22599 main.go:141] libmachine: (functional-882970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:32:48", ip: ""} in network mk-functional-882970: {Iface:virbr1 ExpiryTime:2024-06-04 22:40:09 +0000 UTC Type:0 Mac:52:54:00:94:32:48 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-882970 Clientid:01:52:54:00:94:32:48}
I0604 21:43:33.770097   22599 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined IP address 192.168.39.126 and MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.770228   22599 main.go:141] libmachine: (functional-882970) Calling .GetSSHPort
I0604 21:43:33.770398   22599 main.go:141] libmachine: (functional-882970) Calling .GetSSHKeyPath
I0604 21:43:33.770543   22599 main.go:141] libmachine: (functional-882970) Calling .GetSSHUsername
I0604 21:43:33.770780   22599 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/functional-882970/id_rsa Username:docker}
I0604 21:43:33.871431   22599 ssh_runner.go:195] Run: sudo crictl images --output json
I0604 21:43:33.948688   22599 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.948713   22599 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.949024   22599 main.go:141] libmachine: (functional-882970) DBG | Closing plugin on server side
I0604 21:43:33.949058   22599 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.949072   22599 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:33.949089   22599 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.949103   22599 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.949303   22599 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.949314   22599 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
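The `image ls --format json` output captured above is a flat JSON array of image records, each with `id`, `repoDigests`, `repoTags`, and a `size` given as a decimal string of bytes. A minimal consumption sketch, using two records copied from the output above as sample data:

```python
import json

# Two records copied verbatim from the `image ls --format json` output above.
sample = '''[
  {"id": "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
   "repoDigests": ["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],
   "repoTags": ["registry.k8s.io/pause:3.9"], "size": "321520"},
  {"id": "sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
   "repoDigests": ["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],
   "repoTags": ["registry.k8s.io/etcd:3.5.12-0"], "size": "57236178"}
]'''

images = json.loads(sample)
# "size" is a decimal string of bytes, so convert before summing.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes)  # 57557698
print(tags)
```

Note that `repoTags` can be empty (the dashboard and metrics-scraper images above carry only digests), so code iterating tags should tolerate `[]`.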

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882970 image ls --format yaml --alsologtostderr:
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "32766679"
- id: sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "31136343"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "19314468"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:1138773ffdb526cc77476392d979090fbe619278b04d3604343743d6c2deaf13
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-882970
size: "992"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-882970
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
repoTags:
- docker.io/library/nginx:latest
size: "71004355"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "29020445"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882970 image ls --format yaml --alsologtostderr:
I0604 21:43:33.455619   22554 out.go:291] Setting OutFile to fd 1 ...
I0604 21:43:33.455875   22554 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.455885   22554 out.go:304] Setting ErrFile to fd 2...
I0604 21:43:33.455889   22554 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:33.456092   22554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
I0604 21:43:33.456645   22554 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.456740   22554 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:33.457163   22554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.457208   22554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.472205   22554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
I0604 21:43:33.472653   22554 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.473180   22554 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.473200   22554 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.473523   22554 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.473757   22554 main.go:141] libmachine: (functional-882970) Calling .GetState
I0604 21:43:33.475924   22554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:33.475970   22554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:33.489924   22554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
I0604 21:43:33.490277   22554 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:33.490874   22554 main.go:141] libmachine: Using API Version  1
I0604 21:43:33.490896   22554 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:33.491149   22554 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:33.491364   22554 main.go:141] libmachine: (functional-882970) Calling .DriverName
I0604 21:43:33.491560   22554 ssh_runner.go:195] Run: systemctl --version
I0604 21:43:33.491579   22554 main.go:141] libmachine: (functional-882970) Calling .GetSSHHostname
I0604 21:43:33.494171   22554 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.494433   22554 main.go:141] libmachine: (functional-882970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:32:48", ip: ""} in network mk-functional-882970: {Iface:virbr1 ExpiryTime:2024-06-04 22:40:09 +0000 UTC Type:0 Mac:52:54:00:94:32:48 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-882970 Clientid:01:52:54:00:94:32:48}
I0604 21:43:33.494451   22554 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined IP address 192.168.39.126 and MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:33.494666   22554 main.go:141] libmachine: (functional-882970) Calling .GetSSHPort
I0604 21:43:33.494845   22554 main.go:141] libmachine: (functional-882970) Calling .GetSSHKeyPath
I0604 21:43:33.494954   22554 main.go:141] libmachine: (functional-882970) Calling .GetSSHUsername
I0604 21:43:33.495045   22554 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/functional-882970/id_rsa Username:docker}
I0604 21:43:33.582590   22554 ssh_runner.go:195] Run: sudo crictl images --output json
I0604 21:43:33.678852   22554 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.678865   22554 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.679095   22554 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.679112   22554 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:33.679120   22554 main.go:141] libmachine: Making call to close driver server
I0604 21:43:33.679130   22554 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:33.679353   22554 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:33.679375   22554 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:33.679352   22554 main.go:141] libmachine: (functional-882970) DBG | Closing plugin on server side
W0604 21:43:33.679535   22554 root.go:91] failed to log command end to audit: failed to find a log row with id equals to ca9a9e07-4825-4845-b20d-b31d1367edf0
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882970 ssh pgrep buildkitd: exit status 1 (189.286073ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image build -t localhost/my-image:functional-882970 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image build -t localhost/my-image:functional-882970 testdata/build --alsologtostderr: (5.894822357s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882970 image build -t localhost/my-image:functional-882970 testdata/build --alsologtostderr:
I0604 21:43:34.145300   22675 out.go:291] Setting OutFile to fd 1 ...
I0604 21:43:34.145556   22675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:34.145566   22675 out.go:304] Setting ErrFile to fd 2...
I0604 21:43:34.145570   22675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 21:43:34.145742   22675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
I0604 21:43:34.146289   22675 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:34.146753   22675 config.go:182] Loaded profile config "functional-882970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0604 21:43:34.147078   22675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:34.147124   22675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:34.161611   22675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
I0604 21:43:34.162109   22675 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:34.162611   22675 main.go:141] libmachine: Using API Version  1
I0604 21:43:34.162626   22675 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:34.162958   22675 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:34.163155   22675 main.go:141] libmachine: (functional-882970) Calling .GetState
I0604 21:43:34.164872   22675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0604 21:43:34.164906   22675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0604 21:43:34.179087   22675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
I0604 21:43:34.179448   22675 main.go:141] libmachine: () Calling .GetVersion
I0604 21:43:34.179864   22675 main.go:141] libmachine: Using API Version  1
I0604 21:43:34.179881   22675 main.go:141] libmachine: () Calling .SetConfigRaw
I0604 21:43:34.180274   22675 main.go:141] libmachine: () Calling .GetMachineName
I0604 21:43:34.180493   22675 main.go:141] libmachine: (functional-882970) Calling .DriverName
I0604 21:43:34.180701   22675 ssh_runner.go:195] Run: systemctl --version
I0604 21:43:34.180723   22675 main.go:141] libmachine: (functional-882970) Calling .GetSSHHostname
I0604 21:43:34.183298   22675 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:34.183639   22675 main.go:141] libmachine: (functional-882970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:32:48", ip: ""} in network mk-functional-882970: {Iface:virbr1 ExpiryTime:2024-06-04 22:40:09 +0000 UTC Type:0 Mac:52:54:00:94:32:48 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-882970 Clientid:01:52:54:00:94:32:48}
I0604 21:43:34.183665   22675 main.go:141] libmachine: (functional-882970) DBG | domain functional-882970 has defined IP address 192.168.39.126 and MAC address 52:54:00:94:32:48 in network mk-functional-882970
I0604 21:43:34.183906   22675 main.go:141] libmachine: (functional-882970) Calling .GetSSHPort
I0604 21:43:34.184077   22675 main.go:141] libmachine: (functional-882970) Calling .GetSSHKeyPath
I0604 21:43:34.184226   22675 main.go:141] libmachine: (functional-882970) Calling .GetSSHUsername
I0604 21:43:34.184399   22675 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/functional-882970/id_rsa Username:docker}
I0604 21:43:34.263227   22675 build_images.go:161] Building image from path: /tmp/build.2723487110.tar
I0604 21:43:34.263306   22675 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0604 21:43:34.274345   22675 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2723487110.tar
I0604 21:43:34.279445   22675 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2723487110.tar: stat -c "%s %y" /var/lib/minikube/build/build.2723487110.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2723487110.tar': No such file or directory
I0604 21:43:34.279480   22675 ssh_runner.go:362] scp /tmp/build.2723487110.tar --> /var/lib/minikube/build/build.2723487110.tar (3072 bytes)
I0604 21:43:34.305663   22675 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2723487110
I0604 21:43:34.319060   22675 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2723487110 -xf /var/lib/minikube/build/build.2723487110.tar
I0604 21:43:34.330991   22675 containerd.go:394] Building image: /var/lib/minikube/build/build.2723487110
I0604 21:43:34.331064   22675 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2723487110 --local dockerfile=/var/lib/minikube/build/build.2723487110 --output type=image,name=localhost/my-image:functional-882970
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:24d985beca66b6f3daa4029d02f586bbe49f3edfb872dfd1a5617693b914ebba 0.0s done
#8 exporting config sha256:640a99fb329151ef679ad501d43d60046b6c3cdcfc7898842106777a91d5b849 0.0s done
#8 naming to localhost/my-image:functional-882970 done
#8 DONE 0.4s
I0604 21:43:39.949570   22675 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2723487110 --local dockerfile=/var/lib/minikube/build/build.2723487110 --output type=image,name=localhost/my-image:functional-882970: (5.618477813s)
I0604 21:43:39.949635   22675 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2723487110
I0604 21:43:39.969515   22675 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2723487110.tar
I0604 21:43:39.993513   22675 build_images.go:217] Built localhost/my-image:functional-882970 from /tmp/build.2723487110.tar
I0604 21:43:39.993559   22675 build_images.go:133] succeeded building to: functional-882970
I0604 21:43:39.993566   22675 build_images.go:134] failed building to: 
I0604 21:43:39.993596   22675 main.go:141] libmachine: Making call to close driver server
I0604 21:43:39.993608   22675 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:39.993934   22675 main.go:141] libmachine: (functional-882970) DBG | Closing plugin on server side
I0604 21:43:39.993983   22675 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:39.993995   22675 main.go:141] libmachine: Making call to close connection to plugin binary
I0604 21:43:39.994010   22675 main.go:141] libmachine: Making call to close driver server
I0604 21:43:39.994016   22675 main.go:141] libmachine: (functional-882970) Calling .Close
I0604 21:43:39.994231   22675 main.go:141] libmachine: Successfully made call to close driver server
I0604 21:43:39.994256   22675 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.30s)

TestFunctional/parallel/ImageCommands/Setup (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.006036277s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-882970
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.03s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr: (4.293921316s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr: (2.937423595s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.827664781s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-882970
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr
2024/06/04 21:43:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image load --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr: (4.905561498s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image save gcr.io/google-containers/addon-resizer:functional-882970 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image save gcr.io/google-containers/addon-resizer:functional-882970 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.041862184s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image rm gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
E0604 21:43:31.296549   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.41208482s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-882970
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-882970 image save --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-882970 image save --daemon gcr.io/google-containers/addon-resizer:functional-882970 --alsologtostderr: (1.067753312s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-882970
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.10s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-882970
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-882970
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-882970
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (282.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-207529 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0604 21:45:47.452173   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:46:15.138746   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:47:57.172720   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.177990   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.188269   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.208595   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.248853   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.329177   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.489688   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:57.810295   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:58.451239   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:47:59.731754   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:48:02.293419   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:48:07.414358   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:48:17.654625   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-207529 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m41.453603843s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (282.11s)

TestMultiControlPlane/serial/DeployApp (6.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-207529 -- rollout status deployment/busybox: (4.606742535s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-7ddb4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-p97dn -- nslookup kubernetes.io
E0604 21:48:38.135194   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-tg27j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-7ddb4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-p97dn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-tg27j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-7ddb4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-p97dn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-tg27j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.94s)

TestMultiControlPlane/serial/PingHostFromPods (1.11s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-7ddb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-7ddb4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-p97dn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-p97dn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-tg27j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-207529 -- exec busybox-fc5497c4f-tg27j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)

TestMultiControlPlane/serial/AddWorkerNode (45.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-207529 -v=7 --alsologtostderr
E0604 21:49:19.095969   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-207529 -v=7 --alsologtostderr: (44.289988868s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.09s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-207529 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp testdata/cp-test.txt ha-207529:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3390792120/001/cp-test_ha-207529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529:/home/docker/cp-test.txt ha-207529-m02:/home/docker/cp-test_ha-207529_ha-207529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test_ha-207529_ha-207529-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529:/home/docker/cp-test.txt ha-207529-m03:/home/docker/cp-test_ha-207529_ha-207529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test_ha-207529_ha-207529-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529:/home/docker/cp-test.txt ha-207529-m04:/home/docker/cp-test_ha-207529_ha-207529-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test_ha-207529_ha-207529-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp testdata/cp-test.txt ha-207529-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3390792120/001/cp-test_ha-207529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m02:/home/docker/cp-test.txt ha-207529:/home/docker/cp-test_ha-207529-m02_ha-207529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test_ha-207529-m02_ha-207529.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m02:/home/docker/cp-test.txt ha-207529-m03:/home/docker/cp-test_ha-207529-m02_ha-207529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test_ha-207529-m02_ha-207529-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m02:/home/docker/cp-test.txt ha-207529-m04:/home/docker/cp-test_ha-207529-m02_ha-207529-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test_ha-207529-m02_ha-207529-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp testdata/cp-test.txt ha-207529-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3390792120/001/cp-test_ha-207529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m03:/home/docker/cp-test.txt ha-207529:/home/docker/cp-test_ha-207529-m03_ha-207529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test_ha-207529-m03_ha-207529.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m03:/home/docker/cp-test.txt ha-207529-m02:/home/docker/cp-test_ha-207529-m03_ha-207529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test_ha-207529-m03_ha-207529-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m03:/home/docker/cp-test.txt ha-207529-m04:/home/docker/cp-test_ha-207529-m03_ha-207529-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test_ha-207529-m03_ha-207529-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp testdata/cp-test.txt ha-207529-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3390792120/001/cp-test_ha-207529-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m04:/home/docker/cp-test.txt ha-207529:/home/docker/cp-test_ha-207529-m04_ha-207529.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529 "sudo cat /home/docker/cp-test_ha-207529-m04_ha-207529.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m04:/home/docker/cp-test.txt ha-207529-m02:/home/docker/cp-test_ha-207529-m04_ha-207529-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m02 "sudo cat /home/docker/cp-test_ha-207529-m04_ha-207529-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 cp ha-207529-m04:/home/docker/cp-test.txt ha-207529-m03:/home/docker/cp-test_ha-207529-m04_ha-207529-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 ssh -n ha-207529-m03 "sudo cat /home/docker/cp-test_ha-207529-m04_ha-207529-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.47s)

TestMultiControlPlane/serial/StopSecondaryNode (92.26s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 node stop m02 -v=7 --alsologtostderr
E0604 21:50:41.016835   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:50:47.452643   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-207529 node stop m02 -v=7 --alsologtostderr: (1m31.641010841s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr: exit status 7 (618.986663ms)
-- stdout --
	ha-207529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-207529-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-207529-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-207529-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0604 21:51:10.450878   27583 out.go:291] Setting OutFile to fd 1 ...
	I0604 21:51:10.451134   27583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:51:10.451147   27583 out.go:304] Setting ErrFile to fd 2...
	I0604 21:51:10.451154   27583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:51:10.451804   27583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 21:51:10.452249   27583 out.go:298] Setting JSON to false
	I0604 21:51:10.452275   27583 mustload.go:65] Loading cluster: ha-207529
	I0604 21:51:10.452380   27583 notify.go:220] Checking for updates...
	I0604 21:51:10.452809   27583 config.go:182] Loaded profile config "ha-207529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 21:51:10.452831   27583 status.go:255] checking status of ha-207529 ...
	I0604 21:51:10.453269   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.453338   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.473530   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0604 21:51:10.473960   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.474496   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.474512   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.474898   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.475102   27583 main.go:141] libmachine: (ha-207529) Calling .GetState
	I0604 21:51:10.476946   27583 status.go:330] ha-207529 host status = "Running" (err=<nil>)
	I0604 21:51:10.476965   27583 host.go:66] Checking if "ha-207529" exists ...
	I0604 21:51:10.477414   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.477463   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.492009   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0604 21:51:10.492413   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.492889   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.492903   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.493184   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.493348   27583 main.go:141] libmachine: (ha-207529) Calling .GetIP
	I0604 21:51:10.496404   27583 main.go:141] libmachine: (ha-207529) DBG | domain ha-207529 has defined MAC address 52:54:00:0d:69:f3 in network mk-ha-207529
	I0604 21:51:10.496938   27583 main.go:141] libmachine: (ha-207529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:69:f3", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:44:04 +0000 UTC Type:0 Mac:52:54:00:0d:69:f3 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-207529 Clientid:01:52:54:00:0d:69:f3}
	I0604 21:51:10.496971   27583 main.go:141] libmachine: (ha-207529) DBG | domain ha-207529 has defined IP address 192.168.39.43 and MAC address 52:54:00:0d:69:f3 in network mk-ha-207529
	I0604 21:51:10.497132   27583 host.go:66] Checking if "ha-207529" exists ...
	I0604 21:51:10.497396   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.497435   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.511180   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43351
	I0604 21:51:10.511539   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.511952   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.511978   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.512251   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.512429   27583 main.go:141] libmachine: (ha-207529) Calling .DriverName
	I0604 21:51:10.512654   27583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 21:51:10.512676   27583 main.go:141] libmachine: (ha-207529) Calling .GetSSHHostname
	I0604 21:51:10.515489   27583 main.go:141] libmachine: (ha-207529) DBG | domain ha-207529 has defined MAC address 52:54:00:0d:69:f3 in network mk-ha-207529
	I0604 21:51:10.515889   27583 main.go:141] libmachine: (ha-207529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:69:f3", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:44:04 +0000 UTC Type:0 Mac:52:54:00:0d:69:f3 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-207529 Clientid:01:52:54:00:0d:69:f3}
	I0604 21:51:10.515925   27583 main.go:141] libmachine: (ha-207529) DBG | domain ha-207529 has defined IP address 192.168.39.43 and MAC address 52:54:00:0d:69:f3 in network mk-ha-207529
	I0604 21:51:10.516003   27583 main.go:141] libmachine: (ha-207529) Calling .GetSSHPort
	I0604 21:51:10.516166   27583 main.go:141] libmachine: (ha-207529) Calling .GetSSHKeyPath
	I0604 21:51:10.516298   27583 main.go:141] libmachine: (ha-207529) Calling .GetSSHUsername
	I0604 21:51:10.516392   27583 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/ha-207529/id_rsa Username:docker}
	I0604 21:51:10.601397   27583 ssh_runner.go:195] Run: systemctl --version
	I0604 21:51:10.608547   27583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:51:10.624565   27583 kubeconfig.go:125] found "ha-207529" server: "https://192.168.39.254:8443"
	I0604 21:51:10.624594   27583 api_server.go:166] Checking apiserver status ...
	I0604 21:51:10.624639   27583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:51:10.639138   27583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0604 21:51:10.648468   27583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 21:51:10.648530   27583 ssh_runner.go:195] Run: ls
	I0604 21:51:10.652996   27583 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0604 21:51:10.657241   27583 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0604 21:51:10.657259   27583 status.go:422] ha-207529 apiserver status = Running (err=<nil>)
	I0604 21:51:10.657267   27583 status.go:257] ha-207529 status: &{Name:ha-207529 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 21:51:10.657282   27583 status.go:255] checking status of ha-207529-m02 ...
	I0604 21:51:10.657548   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.657582   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.671835   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0604 21:51:10.672203   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.672621   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.672644   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.673010   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.673220   27583 main.go:141] libmachine: (ha-207529-m02) Calling .GetState
	I0604 21:51:10.674861   27583 status.go:330] ha-207529-m02 host status = "Stopped" (err=<nil>)
	I0604 21:51:10.674876   27583 status.go:343] host is not running, skipping remaining checks
	I0604 21:51:10.674883   27583 status.go:257] ha-207529-m02 status: &{Name:ha-207529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 21:51:10.674900   27583 status.go:255] checking status of ha-207529-m03 ...
	I0604 21:51:10.675160   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.675193   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.690561   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0604 21:51:10.690930   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.691382   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.691403   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.691698   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.691881   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetState
	I0604 21:51:10.693534   27583 status.go:330] ha-207529-m03 host status = "Running" (err=<nil>)
	I0604 21:51:10.693550   27583 host.go:66] Checking if "ha-207529-m03" exists ...
	I0604 21:51:10.693834   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.693875   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.708165   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0604 21:51:10.708543   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.708965   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.708986   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.709278   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.709478   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetIP
	I0604 21:51:10.711998   27583 main.go:141] libmachine: (ha-207529-m03) DBG | domain ha-207529-m03 has defined MAC address 52:54:00:f5:50:b3 in network mk-ha-207529
	I0604 21:51:10.712351   27583 main.go:141] libmachine: (ha-207529-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:50:b3", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:47:38 +0000 UTC Type:0 Mac:52:54:00:f5:50:b3 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-207529-m03 Clientid:01:52:54:00:f5:50:b3}
	I0604 21:51:10.712381   27583 main.go:141] libmachine: (ha-207529-m03) DBG | domain ha-207529-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:f5:50:b3 in network mk-ha-207529
	I0604 21:51:10.712510   27583 host.go:66] Checking if "ha-207529-m03" exists ...
	I0604 21:51:10.712917   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.712958   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.728375   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I0604 21:51:10.728821   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.729333   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.729353   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.729663   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.729872   27583 main.go:141] libmachine: (ha-207529-m03) Calling .DriverName
	I0604 21:51:10.730098   27583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 21:51:10.730121   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetSSHHostname
	I0604 21:51:10.732509   27583 main.go:141] libmachine: (ha-207529-m03) DBG | domain ha-207529-m03 has defined MAC address 52:54:00:f5:50:b3 in network mk-ha-207529
	I0604 21:51:10.732988   27583 main.go:141] libmachine: (ha-207529-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:50:b3", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:47:38 +0000 UTC Type:0 Mac:52:54:00:f5:50:b3 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-207529-m03 Clientid:01:52:54:00:f5:50:b3}
	I0604 21:51:10.733009   27583 main.go:141] libmachine: (ha-207529-m03) DBG | domain ha-207529-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:f5:50:b3 in network mk-ha-207529
	I0604 21:51:10.733097   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetSSHPort
	I0604 21:51:10.733265   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetSSHKeyPath
	I0604 21:51:10.733415   27583 main.go:141] libmachine: (ha-207529-m03) Calling .GetSSHUsername
	I0604 21:51:10.733590   27583 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/ha-207529-m03/id_rsa Username:docker}
	I0604 21:51:10.817600   27583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:51:10.836952   27583 kubeconfig.go:125] found "ha-207529" server: "https://192.168.39.254:8443"
	I0604 21:51:10.836980   27583 api_server.go:166] Checking apiserver status ...
	I0604 21:51:10.837012   27583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:51:10.852405   27583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup
	W0604 21:51:10.862132   27583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 21:51:10.862178   27583 ssh_runner.go:195] Run: ls
	I0604 21:51:10.866613   27583 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0604 21:51:10.871144   27583 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0604 21:51:10.871164   27583 status.go:422] ha-207529-m03 apiserver status = Running (err=<nil>)
	I0604 21:51:10.871172   27583 status.go:257] ha-207529-m03 status: &{Name:ha-207529-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 21:51:10.871186   27583 status.go:255] checking status of ha-207529-m04 ...
	I0604 21:51:10.871472   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.871502   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.885887   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I0604 21:51:10.886304   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.886752   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.886774   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.887044   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.887230   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetState
	I0604 21:51:10.888909   27583 status.go:330] ha-207529-m04 host status = "Running" (err=<nil>)
	I0604 21:51:10.888924   27583 host.go:66] Checking if "ha-207529-m04" exists ...
	I0604 21:51:10.889191   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.889223   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.903697   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0604 21:51:10.904072   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.904474   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.904493   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.904858   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.905044   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetIP
	I0604 21:51:10.907758   27583 main.go:141] libmachine: (ha-207529-m04) DBG | domain ha-207529-m04 has defined MAC address 52:54:00:24:f7:4a in network mk-ha-207529
	I0604 21:51:10.908173   27583 main.go:141] libmachine: (ha-207529-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:f7:4a", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:48:55 +0000 UTC Type:0 Mac:52:54:00:24:f7:4a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-207529-m04 Clientid:01:52:54:00:24:f7:4a}
	I0604 21:51:10.908198   27583 main.go:141] libmachine: (ha-207529-m04) DBG | domain ha-207529-m04 has defined IP address 192.168.39.67 and MAC address 52:54:00:24:f7:4a in network mk-ha-207529
	I0604 21:51:10.908348   27583 host.go:66] Checking if "ha-207529-m04" exists ...
	I0604 21:51:10.908668   27583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 21:51:10.908715   27583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 21:51:10.923424   27583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0604 21:51:10.923760   27583 main.go:141] libmachine: () Calling .GetVersion
	I0604 21:51:10.924226   27583 main.go:141] libmachine: Using API Version  1
	I0604 21:51:10.924250   27583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 21:51:10.924577   27583 main.go:141] libmachine: () Calling .GetMachineName
	I0604 21:51:10.924777   27583 main.go:141] libmachine: (ha-207529-m04) Calling .DriverName
	I0604 21:51:10.924969   27583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 21:51:10.924995   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetSSHHostname
	I0604 21:51:10.927881   27583 main.go:141] libmachine: (ha-207529-m04) DBG | domain ha-207529-m04 has defined MAC address 52:54:00:24:f7:4a in network mk-ha-207529
	I0604 21:51:10.928306   27583 main.go:141] libmachine: (ha-207529-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:f7:4a", ip: ""} in network mk-ha-207529: {Iface:virbr1 ExpiryTime:2024-06-04 22:48:55 +0000 UTC Type:0 Mac:52:54:00:24:f7:4a Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-207529-m04 Clientid:01:52:54:00:24:f7:4a}
	I0604 21:51:10.928344   27583 main.go:141] libmachine: (ha-207529-m04) DBG | domain ha-207529-m04 has defined IP address 192.168.39.67 and MAC address 52:54:00:24:f7:4a in network mk-ha-207529
	I0604 21:51:10.928460   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetSSHPort
	I0604 21:51:10.928643   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetSSHKeyPath
	I0604 21:51:10.928793   27583 main.go:141] libmachine: (ha-207529-m04) Calling .GetSSHUsername
	I0604 21:51:10.928947   27583 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/ha-207529-m04/id_rsa Username:docker}
	I0604 21:51:11.014370   27583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:51:11.031217   27583 status.go:257] ha-207529-m04 status: &{Name:ha-207529-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.26s)
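Editor's note: the `unable to find freezer cgroup` warnings in the stderr above come from minikube grepping `/proc/<pid>/cgroup` for a `freezer` controller entry. On a cgroup v2 host that file contains only a single `0::<path>` line, so the `egrep '^[0-9]+:freezer:'` matches nothing and exits 1, which is what the log shows. A minimal self-contained sketch of that check (the sample cgroup lines are illustrative, not taken from this run):

```shell
#!/bin/sh
# Same pattern minikube's apiserver check runs: egrep '^[0-9]+:freezer:'.
# Sample /proc/<pid>/cgroup contents for cgroup v1 vs v2 (illustrative).
v1_line='7:freezer:/kubepods/burstable/pod123'
v2_line='0::/system.slice/containerd.service'

printf '%s\n' "$v1_line" | grep -Eq '^[0-9]+:freezer:' \
  && echo 'v1: freezer entry found'
printf '%s\n' "$v2_line" | grep -Eq '^[0-9]+:freezer:' \
  || echo 'v2: no freezer entry (grep exits 1, as in the log)'
```

The grep exit status, not its output, is what surfaces as "Process exited with status 1" in the warning; the status check then falls back to the `/healthz` probe, which succeeds.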

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (36.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-207529 node start m02 -v=7 --alsologtostderr: (35.952910222s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (483.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-207529 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-207529 -v=7 --alsologtostderr
E0604 21:52:57.172690   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:53:24.857734   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 21:55:47.452479   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-207529 -v=7 --alsologtostderr: (4m36.859157278s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-207529 --wait=true -v=7 --alsologtostderr
E0604 21:57:10.499450   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 21:57:57.172349   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-207529 --wait=true -v=7 --alsologtostderr: (3m26.975722587s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-207529
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (483.92s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-207529 node delete m03 -v=7 --alsologtostderr: (6.91614894s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (274.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 stop -v=7 --alsologtostderr
E0604 22:00:47.452625   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:02:57.173328   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 22:04:20.218465   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-207529 stop -v=7 --alsologtostderr: (4m34.584161727s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr: exit status 7 (98.237341ms)

                                                
                                                
-- stdout --
	ha-207529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-207529-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-207529-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0604 22:04:35.317912   31531 out.go:291] Setting OutFile to fd 1 ...
	I0604 22:04:35.318030   31531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:04:35.318039   31531 out.go:304] Setting ErrFile to fd 2...
	I0604 22:04:35.318045   31531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:04:35.318216   31531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 22:04:35.318380   31531 out.go:298] Setting JSON to false
	I0604 22:04:35.318406   31531 mustload.go:65] Loading cluster: ha-207529
	I0604 22:04:35.318500   31531 notify.go:220] Checking for updates...
	I0604 22:04:35.318807   31531 config.go:182] Loaded profile config "ha-207529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:04:35.318824   31531 status.go:255] checking status of ha-207529 ...
	I0604 22:04:35.319170   31531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:04:35.319250   31531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:04:35.340724   31531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0604 22:04:35.341133   31531 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:04:35.341620   31531 main.go:141] libmachine: Using API Version  1
	I0604 22:04:35.341640   31531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:04:35.342009   31531 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:04:35.342232   31531 main.go:141] libmachine: (ha-207529) Calling .GetState
	I0604 22:04:35.343850   31531 status.go:330] ha-207529 host status = "Stopped" (err=<nil>)
	I0604 22:04:35.343865   31531 status.go:343] host is not running, skipping remaining checks
	I0604 22:04:35.343872   31531 status.go:257] ha-207529 status: &{Name:ha-207529 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 22:04:35.343896   31531 status.go:255] checking status of ha-207529-m02 ...
	I0604 22:04:35.344153   31531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:04:35.344187   31531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:04:35.358232   31531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0604 22:04:35.358664   31531 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:04:35.359101   31531 main.go:141] libmachine: Using API Version  1
	I0604 22:04:35.359120   31531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:04:35.359395   31531 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:04:35.359546   31531 main.go:141] libmachine: (ha-207529-m02) Calling .GetState
	I0604 22:04:35.360903   31531 status.go:330] ha-207529-m02 host status = "Stopped" (err=<nil>)
	I0604 22:04:35.360915   31531 status.go:343] host is not running, skipping remaining checks
	I0604 22:04:35.360920   31531 status.go:257] ha-207529-m02 status: &{Name:ha-207529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 22:04:35.360942   31531 status.go:255] checking status of ha-207529-m04 ...
	I0604 22:04:35.361212   31531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:04:35.361250   31531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:04:35.374918   31531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0604 22:04:35.375300   31531 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:04:35.375712   31531 main.go:141] libmachine: Using API Version  1
	I0604 22:04:35.375735   31531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:04:35.376024   31531 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:04:35.376193   31531 main.go:141] libmachine: (ha-207529-m04) Calling .GetState
	I0604 22:04:35.377437   31531 status.go:330] ha-207529-m04 host status = "Stopped" (err=<nil>)
	I0604 22:04:35.377453   31531 status.go:343] host is not running, skipping remaining checks
	I0604 22:04:35.377460   31531 status.go:257] ha-207529-m04 status: &{Name:ha-207529-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.68s)
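Editor's note: the `exit status 7` above is consistent with minikube's bitmask-style status exit codes (host, kubelet, and apiserver each contributing a bit when stopped); that mapping is an assumption from minikube's `status` command, not something shown in this log. A self-contained sketch that tallies stopped hosts from a status dump shaped like the `-- stdout --` block above:

```shell
#!/bin/sh
# Count stopped hosts in a `minikube status`-style dump. The here-string
# mirrors the StopCluster stdout above (illustrative, tabs stripped).
status='ha-207529
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-207529-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-207529-m04
type: Worker
host: Stopped
kubelet: Stopped'

# grep -c counts matching lines; prints 3 for the sample above.
printf '%s\n' "$status" | grep -c '^host: Stopped'
```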

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (157.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-207529 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0604 22:05:47.451857   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-207529 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m36.665323448s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (157.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (69.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-207529 --control-plane -v=7 --alsologtostderr
E0604 22:07:57.172693   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-207529 --control-plane -v=7 --alsologtostderr: (1m8.50101688s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-207529 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (61.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-925428 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-925428 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.666346035s)
--- PASS: TestJSONOutput/start/Command (61.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-925428 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-925428 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.53s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-925428 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-925428 --output=json --user=testUser: (6.525714891s)
--- PASS: TestJSONOutput/stop/Command (6.53s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-824991 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-824991 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.12701ms)

-- stdout --
	{"specversion":"1.0","id":"576d994d-8bce-403c-9284-80a703c0644e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-824991] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c6be794-e5fb-42bc-a736-598a139129bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19024"}}
	{"specversion":"1.0","id":"d0b81418-95cb-4da4-a853-59b6bb6df67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c95d62e-e6ab-4dd6-89a0-79767934fa1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig"}}
	{"specversion":"1.0","id":"827b50fc-d484-40d3-b4ed-657a371b8749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube"}}
	{"specversion":"1.0","id":"c0a575ef-5872-4918-9cda-804a60407ac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ca2b962a-95c1-4560-863b-5ffc52c7065c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"47a3d1a6-f722-4007-bb81-7cf33fae19bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-824991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-824991
--- PASS: TestErrorJSONOutput (0.18s)
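The events in the TestErrorJSONOutput capture above are one JSON object per line in a CloudEvents-style envelope (`specversion`, `id`, `source`, `type`, `data`), and the failure surfaces as a single event of type `io.k8s.sigs.minikube.error`. A minimal sketch of pulling the error name, exit code, and message out of such a stream, using two sample lines copied from the stdout above:

```python
import json

# Two sample events copied from the TestErrorJSONOutput stdout above:
# an info event and the terminal error event.
events = [
    '{"specversion":"1.0","id":"1c6be794-e5fb-42bc-a736-598a139129bb",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info",'
    '"datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19024"}}',
    '{"specversion":"1.0","id":"47a3d1a6-f722-4007-bb81-7cf33fae19bd",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
    '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def collect_errors(lines):
    """Parse one JSON event per line; keep only minikube error events."""
    errors = []
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event["data"]
            errors.append((data["name"], data["exitcode"], data["message"]))
    return errors

for name, code, msg in collect_errors(events):
    print(f"{name} (exit {code}): {msg}")
```

Note the `exitcode` field is a string in the event payload, matching the process exit status 56 recorded by the test harness.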

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (90.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-055284 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-055284 --driver=kvm2  --container-runtime=containerd: (43.606281301s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-057623 --driver=kvm2  --container-runtime=containerd
E0604 22:10:47.452093   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-057623 --driver=kvm2  --container-runtime=containerd: (44.566594354s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-055284
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-057623
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-057623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-057623
helpers_test.go:175: Cleaning up "first-055284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-055284
--- PASS: TestMinikubeProfile (90.90s)

TestMountStart/serial/StartWithMountFirst (28.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-245949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-245949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.152158537s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.15s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-245949 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-245949 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (33.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-259353 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-259353 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (32.17859481s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.18s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-245949 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-259353
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-259353: (1.275323465s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (24.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-259353
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-259353: (23.041857535s)
--- PASS: TestMountStart/serial/RestartStopped (24.04s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-259353 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (105.27s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-069803 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0604 22:12:57.172650   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 22:13:50.500207   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-069803 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m44.884239714s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.27s)

TestMultiNode/serial/DeployApp2Nodes (6.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-069803 -- rollout status deployment/busybox: (5.015079148s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-cscqd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-w8w8j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-cscqd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-w8w8j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-cscqd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-w8w8j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.55s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-cscqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-cscqd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-w8w8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-069803 -- exec busybox-fc5497c4f-w8w8j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (40.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-069803 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-069803 -v 3 --alsologtostderr: (39.809953018s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.38s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-069803 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp testdata/cp-test.txt multinode-069803:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2739829521/001/cp-test_multinode-069803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803:/home/docker/cp-test.txt multinode-069803-m02:/home/docker/cp-test_multinode-069803_multinode-069803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test_multinode-069803_multinode-069803-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803:/home/docker/cp-test.txt multinode-069803-m03:/home/docker/cp-test_multinode-069803_multinode-069803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test_multinode-069803_multinode-069803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp testdata/cp-test.txt multinode-069803-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2739829521/001/cp-test_multinode-069803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m02:/home/docker/cp-test.txt multinode-069803:/home/docker/cp-test_multinode-069803-m02_multinode-069803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test_multinode-069803-m02_multinode-069803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m02:/home/docker/cp-test.txt multinode-069803-m03:/home/docker/cp-test_multinode-069803-m02_multinode-069803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test_multinode-069803-m02_multinode-069803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp testdata/cp-test.txt multinode-069803-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2739829521/001/cp-test_multinode-069803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m03:/home/docker/cp-test.txt multinode-069803:/home/docker/cp-test_multinode-069803-m03_multinode-069803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803 "sudo cat /home/docker/cp-test_multinode-069803-m03_multinode-069803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 cp multinode-069803-m03:/home/docker/cp-test.txt multinode-069803-m02:/home/docker/cp-test_multinode-069803-m03_multinode-069803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 ssh -n multinode-069803-m02 "sudo cat /home/docker/cp-test_multinode-069803-m03_multinode-069803-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.00s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-069803 node stop m03: (1.394409145s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-069803 status: exit status 7 (408.956444ms)

-- stdout --
	multinode-069803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-069803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-069803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr: exit status 7 (421.758582ms)

-- stdout --
	multinode-069803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-069803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-069803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0604 22:15:18.723974   39030 out.go:291] Setting OutFile to fd 1 ...
	I0604 22:15:18.724357   39030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:15:18.724367   39030 out.go:304] Setting ErrFile to fd 2...
	I0604 22:15:18.724371   39030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:15:18.724549   39030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 22:15:18.724703   39030 out.go:298] Setting JSON to false
	I0604 22:15:18.724728   39030 mustload.go:65] Loading cluster: multinode-069803
	I0604 22:15:18.724822   39030 notify.go:220] Checking for updates...
	I0604 22:15:18.725107   39030 config.go:182] Loaded profile config "multinode-069803": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:15:18.725122   39030 status.go:255] checking status of multinode-069803 ...
	I0604 22:15:18.725508   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.725547   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.745609   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I0604 22:15:18.745976   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.746578   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.746604   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.747030   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.747230   39030 main.go:141] libmachine: (multinode-069803) Calling .GetState
	I0604 22:15:18.749032   39030 status.go:330] multinode-069803 host status = "Running" (err=<nil>)
	I0604 22:15:18.749049   39030 host.go:66] Checking if "multinode-069803" exists ...
	I0604 22:15:18.749448   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.749490   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.764130   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0604 22:15:18.764442   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.764796   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.764832   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.765161   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.765322   39030 main.go:141] libmachine: (multinode-069803) Calling .GetIP
	I0604 22:15:18.767951   39030 main.go:141] libmachine: (multinode-069803) DBG | domain multinode-069803 has defined MAC address 52:54:00:b5:e6:06 in network mk-multinode-069803
	I0604 22:15:18.768362   39030 main.go:141] libmachine: (multinode-069803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e6:06", ip: ""} in network mk-multinode-069803: {Iface:virbr1 ExpiryTime:2024-06-04 23:12:50 +0000 UTC Type:0 Mac:52:54:00:b5:e6:06 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-069803 Clientid:01:52:54:00:b5:e6:06}
	I0604 22:15:18.768384   39030 main.go:141] libmachine: (multinode-069803) DBG | domain multinode-069803 has defined IP address 192.168.39.105 and MAC address 52:54:00:b5:e6:06 in network mk-multinode-069803
	I0604 22:15:18.768577   39030 host.go:66] Checking if "multinode-069803" exists ...
	I0604 22:15:18.768885   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.768924   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.783075   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I0604 22:15:18.783412   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.783800   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.783814   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.784127   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.784323   39030 main.go:141] libmachine: (multinode-069803) Calling .DriverName
	I0604 22:15:18.784534   39030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 22:15:18.784569   39030 main.go:141] libmachine: (multinode-069803) Calling .GetSSHHostname
	I0604 22:15:18.787036   39030 main.go:141] libmachine: (multinode-069803) DBG | domain multinode-069803 has defined MAC address 52:54:00:b5:e6:06 in network mk-multinode-069803
	I0604 22:15:18.787405   39030 main.go:141] libmachine: (multinode-069803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e6:06", ip: ""} in network mk-multinode-069803: {Iface:virbr1 ExpiryTime:2024-06-04 23:12:50 +0000 UTC Type:0 Mac:52:54:00:b5:e6:06 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-069803 Clientid:01:52:54:00:b5:e6:06}
	I0604 22:15:18.787434   39030 main.go:141] libmachine: (multinode-069803) DBG | domain multinode-069803 has defined IP address 192.168.39.105 and MAC address 52:54:00:b5:e6:06 in network mk-multinode-069803
	I0604 22:15:18.787567   39030 main.go:141] libmachine: (multinode-069803) Calling .GetSSHPort
	I0604 22:15:18.787718   39030 main.go:141] libmachine: (multinode-069803) Calling .GetSSHKeyPath
	I0604 22:15:18.787872   39030 main.go:141] libmachine: (multinode-069803) Calling .GetSSHUsername
	I0604 22:15:18.788091   39030 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/multinode-069803/id_rsa Username:docker}
	I0604 22:15:18.876479   39030 ssh_runner.go:195] Run: systemctl --version
	I0604 22:15:18.882719   39030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:15:18.901606   39030 kubeconfig.go:125] found "multinode-069803" server: "https://192.168.39.105:8443"
	I0604 22:15:18.901628   39030 api_server.go:166] Checking apiserver status ...
	I0604 22:15:18.901655   39030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 22:15:18.915148   39030 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0604 22:15:18.924337   39030 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 22:15:18.924392   39030 ssh_runner.go:195] Run: ls
	I0604 22:15:18.928438   39030 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0604 22:15:18.933465   39030 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0604 22:15:18.933482   39030 status.go:422] multinode-069803 apiserver status = Running (err=<nil>)
	I0604 22:15:18.933490   39030 status.go:257] multinode-069803 status: &{Name:multinode-069803 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 22:15:18.933504   39030 status.go:255] checking status of multinode-069803-m02 ...
	I0604 22:15:18.933796   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.933831   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.948706   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0604 22:15:18.949050   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.949506   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.949521   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.949806   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.949999   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetState
	I0604 22:15:18.951567   39030 status.go:330] multinode-069803-m02 host status = "Running" (err=<nil>)
	I0604 22:15:18.951580   39030 host.go:66] Checking if "multinode-069803-m02" exists ...
	I0604 22:15:18.951861   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.951891   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.967423   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0604 22:15:18.967795   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.968230   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.968252   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.968567   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.968725   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetIP
	I0604 22:15:18.971204   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | domain multinode-069803-m02 has defined MAC address 52:54:00:93:ea:8b in network mk-multinode-069803
	I0604 22:15:18.971587   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ea:8b", ip: ""} in network mk-multinode-069803: {Iface:virbr1 ExpiryTime:2024-06-04 23:13:55 +0000 UTC Type:0 Mac:52:54:00:93:ea:8b Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-069803-m02 Clientid:01:52:54:00:93:ea:8b}
	I0604 22:15:18.971619   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | domain multinode-069803-m02 has defined IP address 192.168.39.176 and MAC address 52:54:00:93:ea:8b in network mk-multinode-069803
	I0604 22:15:18.971765   39030 host.go:66] Checking if "multinode-069803-m02" exists ...
	I0604 22:15:18.972058   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:18.972087   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:18.986116   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0604 22:15:18.986497   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:18.986977   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:18.986995   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:18.987265   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:18.987460   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .DriverName
	I0604 22:15:18.987694   39030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 22:15:18.987721   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetSSHHostname
	I0604 22:15:18.990325   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | domain multinode-069803-m02 has defined MAC address 52:54:00:93:ea:8b in network mk-multinode-069803
	I0604 22:15:18.990691   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ea:8b", ip: ""} in network mk-multinode-069803: {Iface:virbr1 ExpiryTime:2024-06-04 23:13:55 +0000 UTC Type:0 Mac:52:54:00:93:ea:8b Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-069803-m02 Clientid:01:52:54:00:93:ea:8b}
	I0604 22:15:18.990723   39030 main.go:141] libmachine: (multinode-069803-m02) DBG | domain multinode-069803-m02 has defined IP address 192.168.39.176 and MAC address 52:54:00:93:ea:8b in network mk-multinode-069803
	I0604 22:15:18.990854   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetSSHPort
	I0604 22:15:18.991041   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetSSHKeyPath
	I0604 22:15:18.991191   39030 main.go:141] libmachine: (multinode-069803-m02) Calling .GetSSHUsername
	I0604 22:15:18.991329   39030 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19024-5817/.minikube/machines/multinode-069803-m02/id_rsa Username:docker}
	I0604 22:15:19.071551   39030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:15:19.085685   39030 status.go:257] multinode-069803-m02 status: &{Name:multinode-069803-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0604 22:15:19.085733   39030 status.go:255] checking status of multinode-069803-m03 ...
	I0604 22:15:19.086034   39030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:15:19.086069   39030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:15:19.102194   39030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I0604 22:15:19.102645   39030 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:15:19.103140   39030 main.go:141] libmachine: Using API Version  1
	I0604 22:15:19.103161   39030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:15:19.103449   39030 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:15:19.103617   39030 main.go:141] libmachine: (multinode-069803-m03) Calling .GetState
	I0604 22:15:19.105172   39030 status.go:330] multinode-069803-m03 host status = "Stopped" (err=<nil>)
	I0604 22:15:19.105188   39030 status.go:343] host is not running, skipping remaining checks
	I0604 22:15:19.105195   39030 status.go:257] multinode-069803-m03 status: &{Name:multinode-069803-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (23.55s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-069803 node start m03 -v=7 --alsologtostderr: (22.956411384s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (23.55s)

TestMultiNode/serial/RestartKeepsNodes (288.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-069803
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-069803
E0604 22:15:47.451867   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:17:57.172350   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-069803: (3m4.269928177s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-069803 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-069803 --wait=true -v=8 --alsologtostderr: (1m44.271667191s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-069803
--- PASS: TestMultiNode/serial/RestartKeepsNodes (288.63s)

TestMultiNode/serial/DeleteNode (2.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-069803 node delete m03: (1.773325381s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

TestMultiNode/serial/StopMultiNode (183.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 stop
E0604 22:20:47.452294   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:21:00.219588   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 22:22:57.175627   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-069803 stop: (3m2.950572227s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-069803 status: exit status 7 (81.805926ms)

-- stdout --
	multinode-069803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-069803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr: exit status 7 (81.372311ms)

-- stdout --
	multinode-069803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-069803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0604 22:23:36.616498   42055 out.go:291] Setting OutFile to fd 1 ...
	I0604 22:23:36.616871   42055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:23:36.616920   42055 out.go:304] Setting ErrFile to fd 2...
	I0604 22:23:36.616937   42055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:23:36.617419   42055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 22:23:36.618069   42055 out.go:298] Setting JSON to false
	I0604 22:23:36.618132   42055 mustload.go:65] Loading cluster: multinode-069803
	I0604 22:23:36.618160   42055 notify.go:220] Checking for updates...
	I0604 22:23:36.618567   42055 config.go:182] Loaded profile config "multinode-069803": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:23:36.618582   42055 status.go:255] checking status of multinode-069803 ...
	I0604 22:23:36.618959   42055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:23:36.618991   42055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:23:36.638627   42055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0604 22:23:36.638978   42055 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:23:36.639469   42055 main.go:141] libmachine: Using API Version  1
	I0604 22:23:36.639494   42055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:23:36.639858   42055 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:23:36.640034   42055 main.go:141] libmachine: (multinode-069803) Calling .GetState
	I0604 22:23:36.641548   42055 status.go:330] multinode-069803 host status = "Stopped" (err=<nil>)
	I0604 22:23:36.641563   42055 status.go:343] host is not running, skipping remaining checks
	I0604 22:23:36.641569   42055 status.go:257] multinode-069803 status: &{Name:multinode-069803 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 22:23:36.641585   42055 status.go:255] checking status of multinode-069803-m02 ...
	I0604 22:23:36.641859   42055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0604 22:23:36.641894   42055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0604 22:23:36.655809   42055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0604 22:23:36.656193   42055 main.go:141] libmachine: () Calling .GetVersion
	I0604 22:23:36.656691   42055 main.go:141] libmachine: Using API Version  1
	I0604 22:23:36.656713   42055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0604 22:23:36.656996   42055 main.go:141] libmachine: () Calling .GetMachineName
	I0604 22:23:36.657154   42055 main.go:141] libmachine: (multinode-069803-m02) Calling .GetState
	I0604 22:23:36.658510   42055 status.go:330] multinode-069803-m02 host status = "Stopped" (err=<nil>)
	I0604 22:23:36.658525   42055 status.go:343] host is not running, skipping remaining checks
	I0604 22:23:36.658533   42055 status.go:257] multinode-069803-m02 status: &{Name:multinode-069803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.11s)

TestMultiNode/serial/RestartMultiNode (77.78s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-069803 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-069803 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m17.273018996s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-069803 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.78s)

TestMultiNode/serial/ValidateNameConflict (41.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-069803
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-069803-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-069803-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (61.134694ms)

-- stdout --
	* [multinode-069803-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-069803-m02' is duplicated with machine name 'multinode-069803-m02' in profile 'multinode-069803'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-069803-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-069803-m03 --driver=kvm2  --container-runtime=containerd: (40.757197552s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-069803
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-069803: exit status 80 (200.424456ms)

-- stdout --
	* Adding node m03 to cluster multinode-069803 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-069803-m03 already exists in multinode-069803-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-069803-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.86s)

TestPreload (393.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-211004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0604 22:25:47.452257   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:27:57.172355   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-211004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m45.196983748s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-211004 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-211004 image pull gcr.io/k8s-minikube/busybox: (3.184414277s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-211004
E0604 22:30:30.501712   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:30:47.452346   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-211004: (1m31.522417289s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-211004 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-211004 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m12.174025313s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-211004 image list
helpers_test.go:175: Cleaning up "test-preload-211004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-211004
--- PASS: TestPreload (393.11s)

TestScheduledStopUnix (112.5s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-558385 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-558385 --memory=2048 --driver=kvm2  --container-runtime=containerd: (40.980849837s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558385 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-558385 -n scheduled-stop-558385
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558385 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558385 --cancel-scheduled
E0604 22:32:57.175917   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558385 -n scheduled-stop-558385
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-558385
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558385 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-558385
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-558385: exit status 7 (63.676576ms)

-- stdout --
	scheduled-stop-558385
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558385 -n scheduled-stop-558385
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558385 -n scheduled-stop-558385: exit status 7 (63.772275ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-558385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-558385
--- PASS: TestScheduledStopUnix (112.50s)

TestRunningBinaryUpgrade (205.33s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1167347873 start -p running-upgrade-632034 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1167347873 start -p running-upgrade-632034 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m3.14552051s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-632034 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-632034 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m17.047133105s)
helpers_test.go:175: Cleaning up "running-upgrade-632034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-632034
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-632034: (1.159744488s)
--- PASS: TestRunningBinaryUpgrade (205.33s)

TestKubernetesUpgrade (162.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.390092474s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-754873
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-754873: (1.536122358s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-754873 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-754873 status --format={{.Host}}: exit status 7 (73.95384ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0604 22:37:40.220113   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 22:37:57.175776   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m11.1974006s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-754873 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (84.219444ms)

-- stdout --
	* [kubernetes-upgrade-754873] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-754873
	    minikube start -p kubernetes-upgrade-754873 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7548732 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-754873 --kubernetes-version=v1.30.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-754873 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (31.035044004s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-754873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-754873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-754873: (1.405724169s)
--- PASS: TestKubernetesUpgrade (162.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (79.900409ms)

-- stdout --
	* [NoKubernetes-932963] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (91.14s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932963 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932963 --driver=kvm2  --container-runtime=containerd: (1m30.898163711s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-932963 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.14s)

TestNetworkPlugins/group/false (2.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-053421 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-053421 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (95.986337ms)

-- stdout --
	* [false-053421] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0604 22:34:06.343898   46937 out.go:291] Setting OutFile to fd 1 ...
	I0604 22:34:06.344174   46937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:34:06.344186   46937 out.go:304] Setting ErrFile to fd 2...
	I0604 22:34:06.344192   46937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:34:06.344427   46937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19024-5817/.minikube/bin
	I0604 22:34:06.345092   46937 out.go:298] Setting JSON to false
	I0604 22:34:06.346029   46937 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4587,"bootTime":1717535859,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0604 22:34:06.346093   46937 start.go:139] virtualization: kvm guest
	I0604 22:34:06.348634   46937 out.go:177] * [false-053421] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0604 22:34:06.349995   46937 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 22:34:06.349993   46937 notify.go:220] Checking for updates...
	I0604 22:34:06.351345   46937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 22:34:06.352694   46937 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19024-5817/kubeconfig
	I0604 22:34:06.354096   46937 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19024-5817/.minikube
	I0604 22:34:06.355395   46937 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0604 22:34:06.356544   46937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 22:34:06.358077   46937 config.go:182] Loaded profile config "NoKubernetes-932963": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:34:06.358203   46937 config.go:182] Loaded profile config "force-systemd-env-948969": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:34:06.358308   46937 config.go:182] Loaded profile config "offline-containerd-898790": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0604 22:34:06.358400   46937 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 22:34:06.394295   46937 out.go:177] * Using the kvm2 driver based on user configuration
	I0604 22:34:06.395557   46937 start.go:297] selected driver: kvm2
	I0604 22:34:06.395570   46937 start.go:901] validating driver "kvm2" against <nil>
	I0604 22:34:06.395581   46937 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 22:34:06.397403   46937 out.go:177] 
	W0604 22:34:06.398669   46937 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0604 22:34:06.399794   46937 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-053421 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-053421

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-053421

>>> host: /etc/nsswitch.conf:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /etc/hosts:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /etc/resolv.conf:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-053421

>>> host: crictl pods:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: crictl containers:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> k8s: describe netcat deployment:
error: context "false-053421" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-053421" does not exist

>>> k8s: netcat logs:
error: context "false-053421" does not exist

>>> k8s: describe coredns deployment:
error: context "false-053421" does not exist

>>> k8s: describe coredns pods:
error: context "false-053421" does not exist

>>> k8s: coredns logs:
error: context "false-053421" does not exist

>>> k8s: describe api server pod(s):
error: context "false-053421" does not exist

>>> k8s: api server logs:
error: context "false-053421" does not exist

>>> host: /etc/cni:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: ip a s:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: ip r s:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: iptables-save:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: iptables table nat:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> k8s: describe kube-proxy daemon set:
error: context "false-053421" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-053421" does not exist

>>> k8s: kube-proxy logs:
error: context "false-053421" does not exist

>>> host: kubelet daemon status:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: kubelet daemon config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> k8s: kubelet logs:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-053421

>>> host: docker daemon status:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: docker daemon config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /etc/docker/daemon.json:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: docker system info:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: cri-docker daemon status:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: cri-docker daemon config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: cri-dockerd version:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: containerd daemon status:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

>>> host: containerd daemon config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053421"

                                                
                                                
----------------------- debugLogs end: false-053421 [took: 2.544074097s] --------------------------------
helpers_test.go:175: Cleaning up "false-053421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-053421
--- PASS: TestNetworkPlugins/group/false (2.76s)

TestNoKubernetes/serial/StartWithStopK8s (56.24s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0604 22:35:47.452229   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (54.581299943s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-932963 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-932963 status -o json: exit status 2 (241.177917ms)

-- stdout --
	{"Name":"NoKubernetes-932963","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-932963
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-932963: (1.413818374s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (56.24s)

TestNoKubernetes/serial/Start (52.2s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932963 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (52.195706104s)
--- PASS: TestNoKubernetes/serial/Start (52.20s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-932963 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-932963 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.271714ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (1.11s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestNoKubernetes/serial/Stop (1.71s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-932963
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-932963: (1.708791155s)
--- PASS: TestNoKubernetes/serial/Stop (1.71s)

TestNoKubernetes/serial/StartNoArgs (40.32s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932963 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932963 --driver=kvm2  --container-runtime=containerd: (40.317788236s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-932963 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-932963 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.077796ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/Setup (3.25s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

TestStoppedBinaryUpgrade/Upgrade (188.25s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.968792149 start -p stopped-upgrade-054032 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.968792149 start -p stopped-upgrade-054032 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m9.916290713s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.968792149 -p stopped-upgrade-054032 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.968792149 -p stopped-upgrade-054032 stop: (1.348825287s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-054032 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-054032 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m56.986947479s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (188.25s)

TestPause/serial/Start (140.29s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-279119 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-279119 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m20.292779296s)
--- PASS: TestPause/serial/Start (140.29s)

TestNetworkPlugins/group/auto/Start (121.65s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m1.652464409s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.65s)

TestNetworkPlugins/group/kindnet/Start (116.53s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0604 22:40:47.452434   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m56.526081163s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (116.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-054032
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestNetworkPlugins/group/calico/Start (90.02s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m30.016896617s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.02s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9hrrc" [18af2543-b3df-4755-b176-d90808901e86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9hrrc" [18af2543-b3df-4755-b176-d90808901e86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004237303s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestPause/serial/SecondStartNoReconfiguration (39.18s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-279119 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-279119 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (39.157446846s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.18s)

TestNetworkPlugins/group/custom-flannel/Start (85.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.312442903s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.31s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5b62g" [16ec52b7-09fb-41c1-83a4-dac7e77b0b29] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005324904s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k5vsh" [173e62c7-6589-4724-afe0-0407b2dc2502] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k5vsh" [173e62c7-6589-4724-afe0-0407b2dc2502] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00442476s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestPause/serial/Pause (0.78s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-279119 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-279119 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-279119 --output=json --layout=cluster: exit status 2 (262.036588ms)

-- stdout --
	{"Name":"pause-279119","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-279119","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.74s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-279119 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (1.02s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-279119 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-279119 --alsologtostderr -v=5: (1.022870094s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (1.08s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-279119 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-279119 --alsologtostderr -v=5: (1.076679191s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

TestPause/serial/VerifyDeletedResources (0.54s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

TestNetworkPlugins/group/enable-default-cni/Start (66.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m6.369051793s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.37s)

TestNetworkPlugins/group/flannel/Start (104.55s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m44.552302248s)
--- PASS: TestNetworkPlugins/group/flannel/Start (104.55s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sn96q" [0c7ffb3c-abc1-4fbb-9f51-67173c44c0ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006969251s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-053421 "pgrep -a kubelet"
E0604 22:42:57.172831   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k6gcd" [b887ee35-8595-4d63-9b7a-00e1e89c36df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k6gcd" [b887ee35-8595-4d63-9b7a-00e1e89c36df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004529501s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (62.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-053421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m2.04878837s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.05s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fz2zw" [f8416480-7661-4e3b-82bd-9f60c1951a38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fz2zw" [f8416480-7661-4e3b-82bd-9f60c1951a38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004117199s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hx7rv" [26e82787-af0f-419e-8b02-c2923b64d4cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hx7rv" [26e82787-af0f-419e-8b02-c2923b64d4cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.007478661s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (185.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-420000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-420000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m5.483079998s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (185.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (128.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-117224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-117224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (2m8.625061625s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (128.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9btdz" [136ff21f-caf1-473f-b7c7-ef25fe6bf973] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005227601s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9k9vj" [28347797-92ae-457f-9620-67f08c98c0e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9k9vj" [28347797-92ae-457f-9620-67f08c98c0e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.014210366s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-053421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-053421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9ng6k" [82d1597d-d551-43cc-b43b-a17efeb761c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9ng6k" [82d1597d-d551-43cc-b43b-a17efeb761c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004853651s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.76s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (32.84s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-053421 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-053421 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.169502969s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-053421 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-053421 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174377992s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-053421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (113.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-487005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-487005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m53.14130674s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-053421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-003850 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:45:47.452650   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-003850 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m3.686163556s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117224 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a5588b4d-6859-40fd-98ed-9d10dfec6d0b] Pending
helpers_test.go:344: "busybox" [a5588b4d-6859-40fd-98ed-9d10dfec6d0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a5588b4d-6859-40fd-98ed-9d10dfec6d0b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004082894s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117224 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-117224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-117224 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-117224 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-117224 --alsologtostderr -v=3: (1m31.746272125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-003850 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d4ffa51-50c4-4808-8a75-f3ceb5c46a9e] Pending
helpers_test.go:344: "busybox" [0d4ffa51-50c4-4808-8a75-f3ceb5c46a9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0604 22:46:34.023091   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.028346   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.038585   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.058834   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.099057   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.179321   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:34.339737   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
helpers_test.go:344: "busybox" [0d4ffa51-50c4-4808-8a75-f3ceb5c46a9e] Running
E0604 22:46:34.660158   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:35.300625   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:36.581313   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:46:39.141751   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004398992s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-003850 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-003850 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-003850 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-003850 --alsologtostderr -v=3
E0604 22:46:44.262772   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-003850 --alsologtostderr -v=3: (1m31.671591935s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-487005 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [04950ebc-1249-4cdd-8669-fab9afc172b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [04950ebc-1249-4cdd-8669-fab9afc172b7] Running
E0604 22:46:54.503699   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004033858s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-487005 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-487005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-487005 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.67s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-487005 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-487005 --alsologtostderr -v=3: (1m31.670274657s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-420000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82538895-c3ed-4b78-aa35-e9b60502b0c0] Pending
E0604 22:47:03.565770   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.571034   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.581283   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.601575   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.641846   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.722149   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:03.882557   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:04.202657   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
helpers_test.go:344: "busybox" [82538895-c3ed-4b78-aa35-e9b60502b0c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0604 22:47:04.843196   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:06.123365   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
helpers_test.go:344: "busybox" [82538895-c3ed-4b78-aa35-e9b60502b0c0] Running
E0604 22:47:08.684050   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:10.502696   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004105223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-420000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-420000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0604 22:47:13.805172   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-420000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/old-k8s-version/serial/Stop (91.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-420000 --alsologtostderr -v=3
E0604 22:47:14.984317   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:47:24.045344   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:44.526178   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:47:51.110718   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.115984   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.126247   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.146499   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.186794   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.267130   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.428220   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:51.749247   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:52.389645   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:53.670535   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:47:55.944696   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:47:56.231296   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-420000 --alsologtostderr -v=3: (1m31.816567462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.82s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117224 -n embed-certs-117224
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117224 -n embed-certs-117224: exit status 7 (72.062386ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-117224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (321.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-117224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:47:57.172623   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
E0604 22:48:01.352324   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:48:11.593494   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-117224 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (5m21.232377017s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-117224 -n embed-certs-117224
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (321.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850: exit status 7 (63.364945ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-003850 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-003850 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:48:25.487349   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:48:27.701651   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:27.706955   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:27.717207   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:27.737486   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:27.778382   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:27.858699   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:28.019202   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:28.339374   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:28.980105   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:30.260614   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:32.074098   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-003850 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (5m18.972603165s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-487005 -n no-preload-487005
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-487005 -n no-preload-487005: exit status 7 (62.63335ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-487005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (322.92s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-487005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:48:32.821403   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:36.452357   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.457621   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.467773   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.488163   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.528480   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.609182   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:36.769677   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:37.090606   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:37.731222   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:37.942142   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:39.012261   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:41.572750   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-487005 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (5m22.657959064s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-487005 -n no-preload-487005
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (322.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420000 -n old-k8s-version-420000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420000 -n old-k8s-version-420000: exit status 7 (89.532358ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-420000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (518.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-420000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0604 22:48:46.693867   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:48:48.182872   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:48:56.934344   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:49:08.663789   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:49:13.035068   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:49:17.415145   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:49:17.865150   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:49:23.210726   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.216035   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.226151   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.246863   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.287305   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.367658   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.527855   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:23.848083   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:24.488717   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:25.768941   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:26.895888   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:26.901160   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:26.911401   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:26.931740   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:26.972199   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:27.052597   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:27.213167   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:27.533747   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:28.174968   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:28.329711   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:29.455944   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:32.016571   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:33.450220   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:37.136804   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:43.691273   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:49:47.376956   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:49:47.408197   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:49:49.624837   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:49:58.375376   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:50:04.171972   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:50:07.857329   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:50:34.955766   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:50:45.133062   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:50:47.452317   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/addons-450158/client.crt: no such file or directory
E0604 22:50:48.817551   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:51:11.545513   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
E0604 22:51:20.295942   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
E0604 22:51:34.023888   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:52:01.705715   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/auto-053421/client.crt: no such file or directory
E0604 22:52:03.565242   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:52:07.053622   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:52:10.737786   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
E0604 22:52:31.249279   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/kindnet-053421/client.crt: no such file or directory
E0604 22:52:51.110713   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
E0604 22:52:57.172621   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/functional-882970/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-420000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (8m37.98330351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420000 -n old-k8s-version-420000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (518.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mgc8r" [d83e810d-3877-4cb4-9f0e-fd0d2963541e] Running
E0604 22:53:18.796729   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/calico-053421/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005047253s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mgc8r" [d83e810d-3877-4cb4-9f0e-fd0d2963541e] Running
E0604 22:53:27.701915   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004091602s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-117224 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-117224 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-117224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117224 -n embed-certs-117224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117224 -n embed-certs-117224: exit status 2 (264.584827ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117224 -n embed-certs-117224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117224 -n embed-certs-117224: exit status 2 (258.211192ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-117224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-117224 -n embed-certs-117224
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-117224 -n embed-certs-117224
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-v2x4z" [bbf49d65-fb90-4e8f-b685-6158ee053b20] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-v2x4z" [bbf49d65-fb90-4e8f-b685-6158ee053b20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.00411739s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/newest-cni/serial/FirstStart (61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-285574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:53:36.452418   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-285574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m0.995556409s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-v2x4z" [bbf49d65-fb90-4e8f-b685-6158ee053b20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008963776s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-003850 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-003850 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-003850 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850: exit status 2 (243.179088ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850: exit status 2 (243.196011ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-003850 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-003850 -n default-k8s-diff-port-003850
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0604 22:53:55.386405   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/custom-flannel-053421/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-dz6lb" [c2ad358e-da3b-4024-98fa-9f2dd9e74bb0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004044892s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-dz6lb" [c2ad358e-da3b-4024-98fa-9f2dd9e74bb0] Running
E0604 22:54:04.136323   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/enable-default-cni-053421/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00491315s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-487005 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-487005 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-487005 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-487005 --alsologtostderr -v=1: (1.119610438s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-487005 -n no-preload-487005
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-487005 -n no-preload-487005: exit status 2 (255.123262ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-487005 -n no-preload-487005
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-487005 -n no-preload-487005: exit status 2 (239.883256ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-487005 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-487005 --alsologtostderr -v=1: (1.240698599s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-487005 -n no-preload-487005
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-487005 -n no-preload-487005
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-285574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/newest-cni/serial/Stop (6.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-285574 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-285574 --alsologtostderr -v=3: (6.640214356s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285574 -n newest-cni-285574
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285574 -n newest-cni-285574: exit status 7 (63.892444ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-285574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (32.28s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-285574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1
E0604 22:54:50.894340   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/flannel-053421/client.crt: no such file or directory
E0604 22:54:54.578563   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/bridge-053421/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-285574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.1: (31.996928345s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285574 -n newest-cni-285574
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.28s)
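A single subtest like this one can be re-run in isolation with Go's `-run` filter, where slashes in the name select nested subtests. The sketch below only assembles and prints the invocation (a dry run); the `./test/integration` package path and the `60m` timeout are assumptions, not values stated in this report:

```shell
#!/bin/sh
# Assemble (but do not execute) a go test invocation targeting one subtest.
# Quoting the name keeps it intact through -run's regular-expression matching.
TEST_NAME='TestStartStop/group/newest-cni/serial/SecondStart'
GO_TEST_CMD="go test ./test/integration -v -timeout 60m -run '$TEST_NAME'"
echo "$GO_TEST_CMD"
```

Dropping trailing segments of the name (e.g. stopping at `group/newest-cni`) would select the whole serial group instead of one step.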

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-285574 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-285574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285574 -n newest-cni-285574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285574 -n newest-cni-285574: exit status 2 (232.811165ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285574 -n newest-cni-285574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285574 -n newest-cni-285574: exit status 2 (241.353387ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-285574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285574 -n newest-cni-285574
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285574 -n newest-cni-285574
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sj4cg" [45decc94-8777-4243-8e60-4be1e46d9d04] Running
E0604 22:57:29.489646   13295 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19024-5817/.minikube/profiles/no-preload-487005/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004369928s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sj4cg" [45decc94-8777-4243-8e60-4be1e46d9d04] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004479576s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-420000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-420000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-420000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420000 -n old-k8s-version-420000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420000 -n old-k8s-version-420000: exit status 2 (227.163858ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-420000 -n old-k8s-version-420000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-420000 -n old-k8s-version-420000: exit status 2 (223.663768ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-420000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420000 -n old-k8s-version-420000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-420000 -n old-k8s-version-420000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.31s)
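Every Pause subtest in this report follows the same four-step CLI sequence: pause the profile, check that `{{.APIServer}}` reports `Paused` and `{{.Kubelet}}` reports `Stopped` (both status calls exit with status 2, which the harness accepts), then unpause. A minimal sketch of that sequence, using a profile name taken from the log; the script only prints each command rather than executing it, since running them assumes a live cluster:

```shell
#!/bin/sh
# Dry-run sketch of the pause/unpause verification sequence used by the
# Pause subtests. Commands are printed, not executed, so no cluster is needed.
PROFILE=old-k8s-version-420000

PAUSE_CMD="minikube pause -p $PROFILE --alsologtostderr -v=1"
APISERVER_CMD="minikube status --format={{.APIServer}} -p $PROFILE"
KUBELET_CMD="minikube status --format={{.Kubelet}} -p $PROFILE"
UNPAUSE_CMD="minikube unpause -p $PROFILE --alsologtostderr -v=1"

# Print the sequence in the order the test runs it.
printf '%s\n' "$PAUSE_CMD" "$APISERVER_CMD" "$KUBELET_CMD" "$UNPAUSE_CMD"
```

When executed against a real paused profile, the two status calls are expected to print `Paused` and `Stopped` respectively, matching the `-- stdout --` blocks above.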

Test skip (36/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.02
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.69
263 TestNetworkPlugins/group/cilium 2.93
278 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
x
+
TestNetworkPlugins/group/kubenet (2.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-053421 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-053421" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-053421

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: containerd daemon config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: containerd config dump:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: crio daemon status:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: crio daemon config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: /etc/crio:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

>>> host: crio config:
* Profile "kubenet-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053421"

----------------------- debugLogs end: kubenet-053421 [took: 2.556818184s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-053421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-053421
--- SKIP: TestNetworkPlugins/group/kubenet (2.69s)

TestNetworkPlugins/group/cilium (2.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-053421 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-053421

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-053421

>>> host: /etc/nsswitch.conf:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/hosts:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/resolv.conf:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-053421

>>> host: crictl pods:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: crictl containers:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> k8s: describe netcat deployment:
error: context "cilium-053421" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-053421" does not exist

>>> k8s: netcat logs:
error: context "cilium-053421" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-053421" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-053421" does not exist

>>> k8s: coredns logs:
error: context "cilium-053421" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-053421" does not exist

>>> k8s: api server logs:
error: context "cilium-053421" does not exist

>>> host: /etc/cni:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: ip a s:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: ip r s:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: iptables-save:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: iptables table nat:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-053421

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-053421

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-053421" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-053421" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-053421

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-053421

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-053421" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-053421" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-053421" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-053421" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-053421" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: kubelet daemon config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> k8s: kubelet logs:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-053421

>>> host: docker daemon status:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: docker daemon config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: docker system info:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: cri-docker daemon status:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: cri-docker daemon config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: cri-dockerd version:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: containerd daemon status:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: containerd daemon config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: containerd config dump:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: crio daemon status:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: crio daemon config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: /etc/crio:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

>>> host: crio config:
* Profile "cilium-053421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053421"

----------------------- debugLogs end: cilium-053421 [took: 2.7886176s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-053421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-053421
--- SKIP: TestNetworkPlugins/group/cilium (2.93s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-917619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-917619
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
