Test Report: KVM_Linux_containerd 20062

964562641276d457941dbb6d7cf4aa7e43312d02:2024-12-10:37415

Tests failed (1/328)

|-------|------------------------------|----------|
| Order |         Failed test          | Duration |
|-------|------------------------------|----------|
|    35 | TestAddons/parallel/Registry |   75.27s |
|-------|------------------------------|----------|
TestAddons/parallel/Registry (75.27s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.718846ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-89n6d" [856777db-c8d8-4f9f-b52e-05d4d38090b7] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011410051s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qjd4j" [02bc21df-ac00-4c3c-a980-e033abcac8f0] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004379801s
addons_test.go:331: (dbg) Run:  kubectl --context addons-722117 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-722117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Non-zero exit: kubectl --context addons-722117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.09160399s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:338: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-722117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:342: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ip
2024/12/09 23:51:28 [DEBUG] GET http://192.168.39.28:5000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-722117 -n addons-722117
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 logs -n 25: (1.472036536s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-443803              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| delete  | -p download-only-443803              | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| start   | -o=json --download-only              | download-only-000195 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC |                     |
	|         | -p download-only-000195              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| delete  | -p download-only-000195              | download-only-000195 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| delete  | -p download-only-443803              | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| delete  | -p download-only-000195              | download-only-000195 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| start   | --download-only -p                   | binary-mirror-125941 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC |                     |
	|         | binary-mirror-125941                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33351               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-125941              | binary-mirror-125941 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| addons  | enable dashboard -p                  | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC |                     |
	|         | addons-722117                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC |                     |
	|         | addons-722117                        |                      |         |         |                     |                     |
	| start   | -p addons-722117 --wait=true         | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:49 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-722117 addons disable         | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-722117 addons disable         | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | -p addons-722117                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-722117 addons                 | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-722117 addons                 | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-722117 addons                 | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-722117 addons disable         | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| ip      | addons-722117 ip                     | addons-722117        | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:44:44
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:44:44.598242  317576 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:44:44.598343  317576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:44:44.598351  317576 out.go:358] Setting ErrFile to fd 2...
	I1209 23:44:44.598356  317576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:44:44.598539  317576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1209 23:44:44.599178  317576 out.go:352] Setting JSON to false
	I1209 23:44:44.600009  317576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":26807,"bootTime":1733761078,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:44:44.600069  317576 start.go:139] virtualization: kvm guest
	I1209 23:44:44.602137  317576 out.go:177] * [addons-722117] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:44:44.604129  317576 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:44:44.604143  317576 notify.go:220] Checking for updates...
	I1209 23:44:44.606771  317576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:44:44.608182  317576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:44:44.609575  317576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:44:44.611018  317576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:44:44.612357  317576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:44:44.613850  317576 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:44:44.645952  317576 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:44:44.647407  317576 start.go:297] selected driver: kvm2
	I1209 23:44:44.647420  317576 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:44:44.647432  317576 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:44:44.648142  317576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:44:44.648232  317576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-309592/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:44:44.663241  317576 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:44:44.663290  317576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:44:44.663565  317576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:44:44.663597  317576 cni.go:84] Creating CNI manager for ""
	I1209 23:44:44.663643  317576 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 23:44:44.663652  317576 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:44:44.663695  317576 start.go:340] cluster config:
	{Name:addons-722117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-722117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:44.663809  317576 iso.go:125] acquiring lock: {Name:mk653a727a207899371d18f50d4ce9d11018138a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:44:44.665764  317576 out.go:177] * Starting "addons-722117" primary control-plane node in "addons-722117" cluster
	I1209 23:44:44.667115  317576 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:44:44.667148  317576 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1209 23:44:44.667159  317576 cache.go:56] Caching tarball of preloaded images
	I1209 23:44:44.667271  317576 preload.go:172] Found /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1209 23:44:44.667289  317576 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1209 23:44:44.667587  317576 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/config.json ...
	I1209 23:44:44.667608  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/config.json: {Name:mk5ab32a9e691668187c4a88462274d0826fa7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:44.667780  317576 start.go:360] acquireMachinesLock for addons-722117: {Name:mkef0210a33b38f2348f0c409fcb91d311bb0773 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:44:44.667842  317576 start.go:364] duration metric: took 44.445µs to acquireMachinesLock for "addons-722117"
	I1209 23:44:44.667867  317576 start.go:93] Provisioning new machine with config: &{Name:addons-722117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-722117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 23:44:44.667938  317576 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 23:44:44.669623  317576 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 23:44:44.669755  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:44:44.669799  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:44.684412  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1209 23:44:44.684940  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:44.685562  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:44:44.685586  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:44.685974  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:44.686165  317576 main.go:141] libmachine: (addons-722117) Calling .GetMachineName
	I1209 23:44:44.686300  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:44:44.686462  317576 start.go:159] libmachine.API.Create for "addons-722117" (driver="kvm2")
	I1209 23:44:44.686498  317576 client.go:168] LocalClient.Create starting
	I1209 23:44:44.686546  317576 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem
	I1209 23:44:44.985559  317576 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/cert.pem
	I1209 23:44:45.103966  317576 main.go:141] libmachine: Running pre-create checks...
	I1209 23:44:45.103994  317576 main.go:141] libmachine: (addons-722117) Calling .PreCreateCheck
	I1209 23:44:45.104547  317576 main.go:141] libmachine: (addons-722117) Calling .GetConfigRaw
	I1209 23:44:45.104981  317576 main.go:141] libmachine: Creating machine...
	I1209 23:44:45.104996  317576 main.go:141] libmachine: (addons-722117) Calling .Create
	I1209 23:44:45.105181  317576 main.go:141] libmachine: (addons-722117) Creating KVM machine...
	I1209 23:44:45.106552  317576 main.go:141] libmachine: (addons-722117) DBG | found existing default KVM network
	I1209 23:44:45.107522  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:45.107321  317599 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1209 23:44:45.107545  317576 main.go:141] libmachine: (addons-722117) DBG | created network xml: 
	I1209 23:44:45.107559  317576 main.go:141] libmachine: (addons-722117) DBG | <network>
	I1209 23:44:45.107566  317576 main.go:141] libmachine: (addons-722117) DBG |   <name>mk-addons-722117</name>
	I1209 23:44:45.107575  317576 main.go:141] libmachine: (addons-722117) DBG |   <dns enable='no'/>
	I1209 23:44:45.107585  317576 main.go:141] libmachine: (addons-722117) DBG |   
	I1209 23:44:45.107595  317576 main.go:141] libmachine: (addons-722117) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 23:44:45.107602  317576 main.go:141] libmachine: (addons-722117) DBG |     <dhcp>
	I1209 23:44:45.107610  317576 main.go:141] libmachine: (addons-722117) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 23:44:45.107624  317576 main.go:141] libmachine: (addons-722117) DBG |     </dhcp>
	I1209 23:44:45.107687  317576 main.go:141] libmachine: (addons-722117) DBG |   </ip>
	I1209 23:44:45.107717  317576 main.go:141] libmachine: (addons-722117) DBG |   
	I1209 23:44:45.107759  317576 main.go:141] libmachine: (addons-722117) DBG | </network>
	I1209 23:44:45.107780  317576 main.go:141] libmachine: (addons-722117) DBG | 
	I1209 23:44:45.112843  317576 main.go:141] libmachine: (addons-722117) DBG | trying to create private KVM network mk-addons-722117 192.168.39.0/24...
	I1209 23:44:45.178695  317576 main.go:141] libmachine: (addons-722117) DBG | private KVM network mk-addons-722117 192.168.39.0/24 created
	I1209 23:44:45.178728  317576 main.go:141] libmachine: (addons-722117) Setting up store path in /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117 ...
	I1209 23:44:45.178750  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:45.178651  317599 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:44:45.178820  317576 main.go:141] libmachine: (addons-722117) Building disk image from file:///home/jenkins/minikube-integration/20062-309592/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:44:45.178860  317576 main.go:141] libmachine: (addons-722117) Downloading /home/jenkins/minikube-integration/20062-309592/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-309592/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 23:44:45.491082  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:45.490872  317599 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa...
	I1209 23:44:45.655276  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:45.655138  317599 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/addons-722117.rawdisk...
	I1209 23:44:45.655313  317576 main.go:141] libmachine: (addons-722117) DBG | Writing magic tar header
	I1209 23:44:45.655324  317576 main.go:141] libmachine: (addons-722117) DBG | Writing SSH key tar header
	I1209 23:44:45.655331  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:45.655285  317599 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117 ...
	I1209 23:44:45.655573  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117
	I1209 23:44:45.655602  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117 (perms=drwx------)
	I1209 23:44:45.655614  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-309592/.minikube/machines
	I1209 23:44:45.655627  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:44:45.655657  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-309592
	I1209 23:44:45.655671  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins/minikube-integration/20062-309592/.minikube/machines (perms=drwxr-xr-x)
	I1209 23:44:45.655677  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 23:44:45.655685  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home/jenkins
	I1209 23:44:45.655691  317576 main.go:141] libmachine: (addons-722117) DBG | Checking permissions on dir: /home
	I1209 23:44:45.655699  317576 main.go:141] libmachine: (addons-722117) DBG | Skipping /home - not owner
	I1209 23:44:45.655707  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins/minikube-integration/20062-309592/.minikube (perms=drwxr-xr-x)
	I1209 23:44:45.655716  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins/minikube-integration/20062-309592 (perms=drwxrwxr-x)
	I1209 23:44:45.655723  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 23:44:45.655731  317576 main.go:141] libmachine: (addons-722117) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 23:44:45.655737  317576 main.go:141] libmachine: (addons-722117) Creating domain...
	I1209 23:44:45.656935  317576 main.go:141] libmachine: (addons-722117) define libvirt domain using xml: 
	I1209 23:44:45.656976  317576 main.go:141] libmachine: (addons-722117) <domain type='kvm'>
	I1209 23:44:45.656986  317576 main.go:141] libmachine: (addons-722117)   <name>addons-722117</name>
	I1209 23:44:45.656998  317576 main.go:141] libmachine: (addons-722117)   <memory unit='MiB'>4000</memory>
	I1209 23:44:45.657006  317576 main.go:141] libmachine: (addons-722117)   <vcpu>2</vcpu>
	I1209 23:44:45.657015  317576 main.go:141] libmachine: (addons-722117)   <features>
	I1209 23:44:45.657023  317576 main.go:141] libmachine: (addons-722117)     <acpi/>
	I1209 23:44:45.657031  317576 main.go:141] libmachine: (addons-722117)     <apic/>
	I1209 23:44:45.657039  317576 main.go:141] libmachine: (addons-722117)     <pae/>
	I1209 23:44:45.657062  317576 main.go:141] libmachine: (addons-722117)     
	I1209 23:44:45.657071  317576 main.go:141] libmachine: (addons-722117)   </features>
	I1209 23:44:45.657079  317576 main.go:141] libmachine: (addons-722117)   <cpu mode='host-passthrough'>
	I1209 23:44:45.657087  317576 main.go:141] libmachine: (addons-722117)   
	I1209 23:44:45.657098  317576 main.go:141] libmachine: (addons-722117)   </cpu>
	I1209 23:44:45.657111  317576 main.go:141] libmachine: (addons-722117)   <os>
	I1209 23:44:45.657121  317576 main.go:141] libmachine: (addons-722117)     <type>hvm</type>
	I1209 23:44:45.657130  317576 main.go:141] libmachine: (addons-722117)     <boot dev='cdrom'/>
	I1209 23:44:45.657144  317576 main.go:141] libmachine: (addons-722117)     <boot dev='hd'/>
	I1209 23:44:45.657158  317576 main.go:141] libmachine: (addons-722117)     <bootmenu enable='no'/>
	I1209 23:44:45.657167  317576 main.go:141] libmachine: (addons-722117)   </os>
	I1209 23:44:45.657177  317576 main.go:141] libmachine: (addons-722117)   <devices>
	I1209 23:44:45.657186  317576 main.go:141] libmachine: (addons-722117)     <disk type='file' device='cdrom'>
	I1209 23:44:45.657199  317576 main.go:141] libmachine: (addons-722117)       <source file='/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/boot2docker.iso'/>
	I1209 23:44:45.657215  317576 main.go:141] libmachine: (addons-722117)       <target dev='hdc' bus='scsi'/>
	I1209 23:44:45.657225  317576 main.go:141] libmachine: (addons-722117)       <readonly/>
	I1209 23:44:45.657246  317576 main.go:141] libmachine: (addons-722117)     </disk>
	I1209 23:44:45.657260  317576 main.go:141] libmachine: (addons-722117)     <disk type='file' device='disk'>
	I1209 23:44:45.657274  317576 main.go:141] libmachine: (addons-722117)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 23:44:45.657303  317576 main.go:141] libmachine: (addons-722117)       <source file='/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/addons-722117.rawdisk'/>
	I1209 23:44:45.657325  317576 main.go:141] libmachine: (addons-722117)       <target dev='hda' bus='virtio'/>
	I1209 23:44:45.657331  317576 main.go:141] libmachine: (addons-722117)     </disk>
	I1209 23:44:45.657339  317576 main.go:141] libmachine: (addons-722117)     <interface type='network'>
	I1209 23:44:45.657378  317576 main.go:141] libmachine: (addons-722117)       <source network='mk-addons-722117'/>
	I1209 23:44:45.657403  317576 main.go:141] libmachine: (addons-722117)       <model type='virtio'/>
	I1209 23:44:45.657415  317576 main.go:141] libmachine: (addons-722117)     </interface>
	I1209 23:44:45.657428  317576 main.go:141] libmachine: (addons-722117)     <interface type='network'>
	I1209 23:44:45.657448  317576 main.go:141] libmachine: (addons-722117)       <source network='default'/>
	I1209 23:44:45.657466  317576 main.go:141] libmachine: (addons-722117)       <model type='virtio'/>
	I1209 23:44:45.657477  317576 main.go:141] libmachine: (addons-722117)     </interface>
	I1209 23:44:45.657487  317576 main.go:141] libmachine: (addons-722117)     <serial type='pty'>
	I1209 23:44:45.657500  317576 main.go:141] libmachine: (addons-722117)       <target port='0'/>
	I1209 23:44:45.657510  317576 main.go:141] libmachine: (addons-722117)     </serial>
	I1209 23:44:45.657526  317576 main.go:141] libmachine: (addons-722117)     <console type='pty'>
	I1209 23:44:45.657541  317576 main.go:141] libmachine: (addons-722117)       <target type='serial' port='0'/>
	I1209 23:44:45.657549  317576 main.go:141] libmachine: (addons-722117)     </console>
	I1209 23:44:45.657559  317576 main.go:141] libmachine: (addons-722117)     <rng model='virtio'>
	I1209 23:44:45.657569  317576 main.go:141] libmachine: (addons-722117)       <backend model='random'>/dev/random</backend>
	I1209 23:44:45.657578  317576 main.go:141] libmachine: (addons-722117)     </rng>
	I1209 23:44:45.657586  317576 main.go:141] libmachine: (addons-722117)     
	I1209 23:44:45.657594  317576 main.go:141] libmachine: (addons-722117)     
	I1209 23:44:45.657607  317576 main.go:141] libmachine: (addons-722117)   </devices>
	I1209 23:44:45.657620  317576 main.go:141] libmachine: (addons-722117) </domain>
	I1209 23:44:45.657635  317576 main.go:141] libmachine: (addons-722117) 
	I1209 23:44:45.662047  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:26:05:52 in network default
	I1209 23:44:45.662559  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:45.662575  317576 main.go:141] libmachine: (addons-722117) Ensuring networks are active...
	I1209 23:44:45.663277  317576 main.go:141] libmachine: (addons-722117) Ensuring network default is active
	I1209 23:44:45.663636  317576 main.go:141] libmachine: (addons-722117) Ensuring network mk-addons-722117 is active
	I1209 23:44:45.664073  317576 main.go:141] libmachine: (addons-722117) Getting domain xml...
	I1209 23:44:45.664818  317576 main.go:141] libmachine: (addons-722117) Creating domain...
	I1209 23:44:46.855912  317576 main.go:141] libmachine: (addons-722117) Waiting to get IP...
	I1209 23:44:46.856745  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:46.857100  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:46.857146  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:46.857090  317599 retry.go:31] will retry after 210.063709ms: waiting for machine to come up
	I1209 23:44:47.068545  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:47.069023  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:47.069049  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:47.068982  317599 retry.go:31] will retry after 257.876554ms: waiting for machine to come up
	I1209 23:44:47.328768  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:47.329247  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:47.329295  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:47.329217  317599 retry.go:31] will retry after 324.118908ms: waiting for machine to come up
	I1209 23:44:47.654897  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:47.655350  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:47.655382  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:47.655302  317599 retry.go:31] will retry after 455.601443ms: waiting for machine to come up
	I1209 23:44:48.112916  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:48.113279  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:48.113315  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:48.113217  317599 retry.go:31] will retry after 476.73789ms: waiting for machine to come up
	I1209 23:44:48.591996  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:48.592511  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:48.592537  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:48.592475  317599 retry.go:31] will retry after 776.605222ms: waiting for machine to come up
	I1209 23:44:49.370257  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:49.370563  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:49.370591  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:49.370528  317599 retry.go:31] will retry after 982.758023ms: waiting for machine to come up
	I1209 23:44:50.355314  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:50.355670  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:50.355700  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:50.355634  317599 retry.go:31] will retry after 1.425747125s: waiting for machine to come up
	I1209 23:44:51.783606  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:51.784065  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:51.784089  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:51.784036  317599 retry.go:31] will retry after 1.823813919s: waiting for machine to come up
	I1209 23:44:53.609995  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:53.610366  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:53.610401  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:53.610310  317599 retry.go:31] will retry after 2.197511718s: waiting for machine to come up
	I1209 23:44:55.809597  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:55.809956  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:55.809982  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:55.809915  317599 retry.go:31] will retry after 2.002671748s: waiting for machine to come up
	I1209 23:44:57.815018  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:44:57.815460  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:44:57.815491  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:44:57.815396  317599 retry.go:31] will retry after 2.537694166s: waiting for machine to come up
	I1209 23:45:00.354477  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:00.354874  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:45:00.354908  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:45:00.354818  317599 retry.go:31] will retry after 4.394396648s: waiting for machine to come up
	I1209 23:45:04.751268  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:04.751650  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find current IP address of domain addons-722117 in network mk-addons-722117
	I1209 23:45:04.751672  317576 main.go:141] libmachine: (addons-722117) DBG | I1209 23:45:04.751613  317599 retry.go:31] will retry after 5.153461533s: waiting for machine to come up
	I1209 23:45:09.906461  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:09.906885  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has current primary IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:09.906904  317576 main.go:141] libmachine: (addons-722117) Found IP for machine: 192.168.39.28
	I1209 23:45:09.906913  317576 main.go:141] libmachine: (addons-722117) Reserving static IP address...
	I1209 23:45:09.907224  317576 main.go:141] libmachine: (addons-722117) DBG | unable to find host DHCP lease matching {name: "addons-722117", mac: "52:54:00:fe:b8:62", ip: "192.168.39.28"} in network mk-addons-722117
	I1209 23:45:09.983994  317576 main.go:141] libmachine: (addons-722117) Reserved static IP address: 192.168.39.28
	I1209 23:45:09.984038  317576 main.go:141] libmachine: (addons-722117) DBG | Getting to WaitForSSH function...
	I1209 23:45:09.984047  317576 main.go:141] libmachine: (addons-722117) Waiting for SSH to be available...
	I1209 23:45:09.986802  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:09.987268  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:09.987297  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:09.987479  317576 main.go:141] libmachine: (addons-722117) DBG | Using SSH client type: external
	I1209 23:45:09.987526  317576 main.go:141] libmachine: (addons-722117) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa (-rw-------)
	I1209 23:45:09.987561  317576 main.go:141] libmachine: (addons-722117) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:45:09.987575  317576 main.go:141] libmachine: (addons-722117) DBG | About to run SSH command:
	I1209 23:45:09.987587  317576 main.go:141] libmachine: (addons-722117) DBG | exit 0
	I1209 23:45:10.115390  317576 main.go:141] libmachine: (addons-722117) DBG | SSH cmd err, output: <nil>: 
	I1209 23:45:10.115625  317576 main.go:141] libmachine: (addons-722117) KVM machine creation complete!
	I1209 23:45:10.115996  317576 main.go:141] libmachine: (addons-722117) Calling .GetConfigRaw
	I1209 23:45:10.116622  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:10.116785  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:10.116975  317576 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:45:10.116990  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:10.118155  317576 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:45:10.118189  317576 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:45:10.118195  317576 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:45:10.118201  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.120639  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.121039  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.121064  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.121155  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.121315  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.121483  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.121594  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.121719  317576 main.go:141] libmachine: Using SSH client type: native
	I1209 23:45:10.121968  317576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1209 23:45:10.121987  317576 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:45:10.230619  317576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:45:10.230649  317576 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:45:10.230657  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.233608  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.234000  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.234038  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.234167  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.234375  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.234512  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.234639  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.234805  317576 main.go:141] libmachine: Using SSH client type: native
	I1209 23:45:10.234976  317576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1209 23:45:10.234987  317576 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:45:10.347939  317576 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:45:10.348058  317576 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:45:10.348073  317576 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:45:10.348086  317576 main.go:141] libmachine: (addons-722117) Calling .GetMachineName
	I1209 23:45:10.348475  317576 buildroot.go:166] provisioning hostname "addons-722117"
	I1209 23:45:10.348523  317576 main.go:141] libmachine: (addons-722117) Calling .GetMachineName
	I1209 23:45:10.348733  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.351357  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.351701  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.351723  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.351898  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.352104  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.352243  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.352385  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.352516  317576 main.go:141] libmachine: Using SSH client type: native
	I1209 23:45:10.352713  317576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1209 23:45:10.352733  317576 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-722117 && echo "addons-722117" | sudo tee /etc/hostname
	I1209 23:45:10.478897  317576 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-722117
	
	I1209 23:45:10.478937  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.481742  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.482086  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.482112  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.482237  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.482451  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.482606  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.482768  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.482967  317576 main.go:141] libmachine: Using SSH client type: native
	I1209 23:45:10.483208  317576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1209 23:45:10.483233  317576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-722117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-722117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-722117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:45:10.600740  317576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:45:10.600780  317576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-309592/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-309592/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-309592/.minikube}
	I1209 23:45:10.600821  317576 buildroot.go:174] setting up certificates
	I1209 23:45:10.600833  317576 provision.go:84] configureAuth start
	I1209 23:45:10.600847  317576 main.go:141] libmachine: (addons-722117) Calling .GetMachineName
	I1209 23:45:10.601160  317576 main.go:141] libmachine: (addons-722117) Calling .GetIP
	I1209 23:45:10.604120  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.604558  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.604665  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.604809  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.607123  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.607437  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.607463  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.607636  317576 provision.go:143] copyHostCerts
	I1209 23:45:10.607721  317576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-309592/.minikube/cert.pem (1123 bytes)
	I1209 23:45:10.607888  317576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-309592/.minikube/key.pem (1679 bytes)
	I1209 23:45:10.607975  317576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-309592/.minikube/ca.pem (1082 bytes)
	I1209 23:45:10.608044  317576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-309592/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca-key.pem org=jenkins.addons-722117 san=[127.0.0.1 192.168.39.28 addons-722117 localhost minikube]
	I1209 23:45:10.705576  317576 provision.go:177] copyRemoteCerts
	I1209 23:45:10.705643  317576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:45:10.705671  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.708571  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.708867  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.708897  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.709126  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.709318  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.709446  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.709591  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:10.793745  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:45:10.820028  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:45:10.845046  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:45:10.870464  317576 provision.go:87] duration metric: took 269.611204ms to configureAuth
	I1209 23:45:10.870497  317576 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:45:10.870728  317576 config.go:182] Loaded profile config "addons-722117": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:45:10.870762  317576 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:45:10.870780  317576 main.go:141] libmachine: (addons-722117) Calling .GetURL
	I1209 23:45:10.872031  317576 main.go:141] libmachine: (addons-722117) DBG | Using libvirt version 6000000
	I1209 23:45:10.874281  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.874592  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.874622  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.874819  317576 main.go:141] libmachine: Docker is up and running!
	I1209 23:45:10.874835  317576 main.go:141] libmachine: Reticulating splines...
	I1209 23:45:10.874845  317576 client.go:171] duration metric: took 26.188333409s to LocalClient.Create
	I1209 23:45:10.874879  317576 start.go:167] duration metric: took 26.188416503s to libmachine.API.Create "addons-722117"
	I1209 23:45:10.874893  317576 start.go:293] postStartSetup for "addons-722117" (driver="kvm2")
	I1209 23:45:10.874904  317576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:45:10.874925  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:10.875184  317576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:45:10.875212  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.877320  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.877617  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.877637  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.877807  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.878020  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.878185  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.878327  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:10.962036  317576 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:45:10.966903  317576 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:45:10.966944  317576 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-309592/.minikube/addons for local assets ...
	I1209 23:45:10.967059  317576 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-309592/.minikube/files for local assets ...
	I1209 23:45:10.967097  317576 start.go:296] duration metric: took 92.197187ms for postStartSetup
	I1209 23:45:10.967143  317576 main.go:141] libmachine: (addons-722117) Calling .GetConfigRaw
	I1209 23:45:10.967759  317576 main.go:141] libmachine: (addons-722117) Calling .GetIP
	I1209 23:45:10.970229  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.970542  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.970570  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.970812  317576 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/config.json ...
	I1209 23:45:10.970995  317576 start.go:128] duration metric: took 26.303045531s to createHost
	I1209 23:45:10.971019  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:10.973092  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.973428  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:10.973458  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:10.973575  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:10.973788  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.973961  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:10.974103  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:10.974279  317576 main.go:141] libmachine: Using SSH client type: native
	I1209 23:45:10.974451  317576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1209 23:45:10.974461  317576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:45:11.084177  317576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733787911.059819445
	
	I1209 23:45:11.084202  317576 fix.go:216] guest clock: 1733787911.059819445
	I1209 23:45:11.084210  317576 fix.go:229] Guest: 2024-12-09 23:45:11.059819445 +0000 UTC Remote: 2024-12-09 23:45:10.971007231 +0000 UTC m=+26.411333683 (delta=88.812214ms)
	I1209 23:45:11.084254  317576 fix.go:200] guest clock delta is within tolerance: 88.812214ms
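The fix.go lines above compare the guest clock (read over SSH via `date +%s.%N`) against the host clock and accept the skew if it is within tolerance. A minimal sketch of that delta check, using the exact timestamps from this log (the 2s tolerance value is an assumption for illustration, not taken from the log):

```python
# Recompute the guest/host clock delta reported by fix.go above.
guest_epoch = 1733787911.059819445   # from `date +%s.%N` on the VM
host_epoch = 1733787910.971007231    # host-side timestamp from the log
tolerance_s = 2.0                    # assumed tolerance; not shown in the log

delta_s = guest_epoch - host_epoch
print(f"delta={delta_s * 1000:.3f}ms within_tolerance={abs(delta_s) < tolerance_s}")
```

The printed delta matches the `delta=88.812214ms` figure in the log line above.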
	I1209 23:45:11.084261  317576 start.go:83] releasing machines lock for "addons-722117", held for 26.416405809s
	I1209 23:45:11.084303  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:11.084602  317576 main.go:141] libmachine: (addons-722117) Calling .GetIP
	I1209 23:45:11.087115  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.087457  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:11.087489  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.087624  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:11.088151  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:11.088332  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:11.088442  317576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:45:11.088497  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:11.088560  317576 ssh_runner.go:195] Run: cat /version.json
	I1209 23:45:11.088590  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:11.091002  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.091314  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.091395  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:11.091424  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.091592  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:11.091719  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:11.091747  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:11.091759  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:11.091927  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:11.091956  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:11.092086  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:11.092097  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:11.092303  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:11.092482  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:11.172644  317576 ssh_runner.go:195] Run: systemctl --version
	I1209 23:45:11.197890  317576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:45:11.204130  317576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:45:11.204212  317576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:45:11.221930  317576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:45:11.221966  317576 start.go:495] detecting cgroup driver to use...
	I1209 23:45:11.222047  317576 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 23:45:11.254680  317576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 23:45:11.269949  317576 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:45:11.270036  317576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:45:11.285052  317576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:45:11.300422  317576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:45:11.423744  317576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:45:11.565063  317576 docker.go:233] disabling docker service ...
	I1209 23:45:11.565156  317576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:45:11.580580  317576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:45:11.594411  317576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:45:11.736833  317576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:45:11.846534  317576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:45:11.861061  317576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:45:11.880797  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1209 23:45:11.892145  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 23:45:11.903087  317576 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 23:45:11.903176  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 23:45:11.914292  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 23:45:11.925283  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 23:45:11.936488  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 23:45:11.947872  317576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:45:11.959001  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 23:45:11.970135  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 23:45:11.981420  317576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 23:45:11.993101  317576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:45:12.003276  317576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:45:12.003372  317576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:45:12.017100  317576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:45:12.027637  317576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:45:12.134380  317576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 23:45:12.164813  317576 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 23:45:12.164919  317576 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 23:45:12.169852  317576 retry.go:31] will retry after 591.217754ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
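retry.go above stats the containerd socket, fails because the daemon has not finished restarting, and tries again after a short randomized delay. A hedged Python sketch of that retry-until-success pattern (the delay, jitter, and attempt cap here are illustrative, not minikube's actual values):

```python
import random
import time

def retry(op, attempts=5, base_delay_s=0.5):
    """Run op() until it succeeds, sleeping a jittered delay between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except OSError as err:
            if attempt == attempts:
                raise
            delay = base_delay_s * random.uniform(1.0, 1.5)  # jittered, like the ~591ms delay above
            print(f"will retry after {delay * 1000:.0f}ms: {err}")
            time.sleep(delay)

# Mimic the log: the first stat fails, the second finds the socket.
calls = {"n": 0}
def stat_socket():
    calls["n"] += 1
    if calls["n"] < 2:
        raise OSError("stat: cannot statx '/run/containerd/containerd.sock'")
    return "socket present"

print(retry(stat_socket, base_delay_s=0.01))
```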
	I1209 23:45:12.761641  317576 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 23:45:12.767298  317576 start.go:563] Will wait 60s for crictl version
	I1209 23:45:12.767405  317576 ssh_runner.go:195] Run: which crictl
	I1209 23:45:12.771327  317576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:45:12.808775  317576 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1209 23:45:12.808870  317576 ssh_runner.go:195] Run: containerd --version
	I1209 23:45:12.837845  317576 ssh_runner.go:195] Run: containerd --version
	I1209 23:45:12.866195  317576 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.23 ...
	I1209 23:45:12.867652  317576 main.go:141] libmachine: (addons-722117) Calling .GetIP
	I1209 23:45:12.870348  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:12.870638  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:12.870671  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:12.870906  317576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:45:12.875272  317576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:45:12.888706  317576 kubeadm.go:883] updating cluster {Name:addons-722117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-722117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:45:12.888845  317576 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:45:12.888930  317576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:45:12.921831  317576 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:45:12.921919  317576 ssh_runner.go:195] Run: which lz4
	I1209 23:45:12.926127  317576 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:45:12.930379  317576 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:45:12.930418  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (391520817 bytes)
	I1209 23:45:14.310237  317576 containerd.go:563] duration metric: took 1.384154571s to copy over tarball
	I1209 23:45:14.310345  317576 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:45:16.435194  317576 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.124811116s)
	I1209 23:45:16.435227  317576 containerd.go:570] duration metric: took 2.124954109s to extract the tarball
	I1209 23:45:16.435237  317576 ssh_runner.go:146] rm: /preloaded.tar.lz4
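From the duration metrics above, the 391,520,817-byte preload tarball was copied in 1.384154571s and extracted in 2.124811116s; a quick back-of-the-envelope check of the transfer rates those figures imply:

```python
# Transfer rates of the preloaded-images tarball, from the sizes/durations logged above.
size_bytes = 391_520_817   # preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
copy_s = 1.384154571       # "took 1.384154571s to copy over tarball"
extract_s = 2.124811116    # "Completed: sudo tar ... (2.124811116s)"

copy_rate = size_bytes / copy_s / 1e6
extract_rate = size_bytes / extract_s / 1e6
print(f"copy: {copy_rate:.0f} MB/s, extract: {extract_rate:.0f} MB/s")
```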
	I1209 23:45:16.472822  317576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:45:16.584745  317576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 23:45:16.616714  317576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:45:16.678589  317576 retry.go:31] will retry after 202.821018ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-12-09T23:45:16Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1209 23:45:16.882119  317576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:45:16.920404  317576 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 23:45:16.920434  317576 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:45:16.920447  317576 kubeadm.go:934] updating node { 192.168.39.28 8443 v1.31.2 containerd true true} ...
	I1209 23:45:16.920579  317576 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-722117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-722117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:45:16.920653  317576 ssh_runner.go:195] Run: sudo crictl info
	I1209 23:45:16.954599  317576 cni.go:84] Creating CNI manager for ""
	I1209 23:45:16.954626  317576 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 23:45:16.954640  317576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:45:16.954663  317576 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-722117 NodeName:addons-722117 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:45:16.954811  317576 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-722117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
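The generated kubeadm config above is a single multi-document YAML stream. A small stdlib-only sketch that splits such a stream on `---` separators and pulls out each document's `kind`, showing the four components being configured (naive line matching for illustration, not a real YAML parser):

```python
config = """\
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""  # abbreviated copy of the stream logged above

def kinds(stream: str) -> list[str]:
    """Return the `kind:` of each document in a multi-doc YAML stream."""
    out = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out

print(kinds(config))
```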
	
	I1209 23:45:16.954888  317576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:45:16.965245  317576 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:45:16.965326  317576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:45:16.975306  317576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 23:45:16.993292  317576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:45:17.011451  317576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2306 bytes)
	I1209 23:45:17.029628  317576 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I1209 23:45:17.033787  317576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:45:17.046525  317576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:45:17.161172  317576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:45:17.183164  317576 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117 for IP: 192.168.39.28
	I1209 23:45:17.183192  317576 certs.go:194] generating shared ca certs ...
	I1209 23:45:17.183210  317576 certs.go:226] acquiring lock for ca certs: {Name:mkd03c2697c12765fe7f35296812e835e9bf5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.183383  317576 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-309592/.minikube/ca.key
	I1209 23:45:17.303173  317576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-309592/.minikube/ca.crt ...
	I1209 23:45:17.303208  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/ca.crt: {Name:mk4771d77a0ec5e64f08f6b74ec7eabfa8d25210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.303419  317576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-309592/.minikube/ca.key ...
	I1209 23:45:17.303439  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/ca.key: {Name:mk86d68f7a9141cdf8e236109bea64b890850177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.303546  317576 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.key
	I1209 23:45:17.527960  317576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.crt ...
	I1209 23:45:17.527996  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.crt: {Name:mk1f8c40d11b4f8969cdde2b078c92024abf0ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.528224  317576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.key ...
	I1209 23:45:17.528242  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.key: {Name:mk8c54012aa1d696839c089a6784f86033b44835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.528352  317576 certs.go:256] generating profile certs ...
	I1209 23:45:17.528435  317576 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.key
	I1209 23:45:17.528453  317576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt with IP's: []
	I1209 23:45:17.767624  317576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt ...
	I1209 23:45:17.767660  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: {Name:mk477be7553d07ae3ca292427ca6010b2d5c68e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.767873  317576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.key ...
	I1209 23:45:17.767891  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.key: {Name:mk725d5cacccd5c53aa9c17ec9d8b7080c20821b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.768004  317576 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key.cb3f7bd0
	I1209 23:45:17.768033  317576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt.cb3f7bd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.28]
	I1209 23:45:17.930881  317576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt.cb3f7bd0 ...
	I1209 23:45:17.930923  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt.cb3f7bd0: {Name:mk4969979bafb9d83c4d8c30efe820daf37f5fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.931127  317576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key.cb3f7bd0 ...
	I1209 23:45:17.931146  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key.cb3f7bd0: {Name:mk310cf830a958eff6f87250d39b5e04902010a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:17.931229  317576 certs.go:381] copying /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt.cb3f7bd0 -> /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt
	I1209 23:45:17.931310  317576 certs.go:385] copying /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key.cb3f7bd0 -> /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key
	I1209 23:45:17.931356  317576 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.key
	I1209 23:45:17.931374  317576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.crt with IP's: []
	I1209 23:45:18.029403  317576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.crt ...
	I1209 23:45:18.029439  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.crt: {Name:mke340630761213df774f561e4bd6a1205e12014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:18.029601  317576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.key ...
	I1209 23:45:18.029614  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.key: {Name:mk23999f294ba575f401ed3d7439ad098c8c238c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:18.029789  317576 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 23:45:18.029827  317576 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:45:18.029851  317576 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:45:18.029876  317576 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-309592/.minikube/certs/key.pem (1679 bytes)
	I1209 23:45:18.030505  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:45:18.058321  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:45:18.084406  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:45:18.110532  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 23:45:18.136312  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:45:18.161914  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:45:18.187608  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:45:18.214295  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:45:18.241534  317576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-309592/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:45:18.267397  317576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:45:18.285669  317576 ssh_runner.go:195] Run: openssl version
	I1209 23:45:18.292058  317576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:45:18.304646  317576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:45:18.309925  317576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:45 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:45:18.309988  317576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:45:18.316549  317576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:45:18.328898  317576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:45:18.333788  317576 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:45:18.333874  317576 kubeadm.go:392] StartCluster: {Name:addons-722117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-722117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:45:18.333965  317576 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 23:45:18.334055  317576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:45:18.376787  317576 cri.go:89] found id: ""
	I1209 23:45:18.376888  317576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:45:18.388215  317576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:45:18.398662  317576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:45:18.408988  317576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:45:18.409019  317576 kubeadm.go:157] found existing configuration files:
	
	I1209 23:45:18.409078  317576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:45:18.419485  317576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:45:18.419551  317576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:45:18.430110  317576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:45:18.440139  317576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:45:18.440219  317576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:45:18.450132  317576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:45:18.459929  317576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:45:18.460016  317576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:45:18.469939  317576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:45:18.480386  317576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:45:18.480448  317576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:45:18.490906  317576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:45:18.548459  317576 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:45:18.548557  317576 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:45:18.673760  317576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:45:18.673923  317576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:45:18.674076  317576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:45:18.684224  317576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:45:18.806692  317576 out.go:235]   - Generating certificates and keys ...
	I1209 23:45:18.806835  317576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:45:18.806918  317576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:45:18.821149  317576 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:45:19.067324  317576 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:45:19.160306  317576 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:45:19.622925  317576 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:45:19.737828  317576 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:45:19.737982  317576 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-722117 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
	I1209 23:45:19.902635  317576 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:45:19.902856  317576 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-722117 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
	I1209 23:45:20.091125  317576 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:45:20.393077  317576 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:45:20.573238  317576 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:45:20.573319  317576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:45:20.733899  317576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:45:20.840769  317576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:45:20.974060  317576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:45:21.078940  317576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:45:21.231109  317576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:45:21.231705  317576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:45:21.234255  317576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:45:21.236127  317576 out.go:235]   - Booting up control plane ...
	I1209 23:45:21.236270  317576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:45:21.236374  317576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:45:21.236479  317576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:45:21.252259  317576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:45:21.258209  317576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:45:21.258253  317576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:45:21.389886  317576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:45:21.390068  317576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:45:21.890787  317576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.522281ms
	I1209 23:45:21.890932  317576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:45:26.890326  317576 kubeadm.go:310] [api-check] The API server is healthy after 5.002061962s
	I1209 23:45:26.903924  317576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:45:26.933181  317576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:45:26.974556  317576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:45:26.974765  317576 kubeadm.go:310] [mark-control-plane] Marking the node addons-722117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:45:26.995904  317576 kubeadm.go:310] [bootstrap-token] Using token: t0i6y5.lu9zk5y4besjkjjw
	I1209 23:45:26.997393  317576 out.go:235]   - Configuring RBAC rules ...
	I1209 23:45:26.997541  317576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:45:27.008215  317576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:45:27.031155  317576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:45:27.037508  317576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:45:27.046577  317576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:45:27.052392  317576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:45:27.298524  317576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:45:27.752835  317576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:45:28.300702  317576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:45:28.301754  317576 kubeadm.go:310] 
	I1209 23:45:28.301838  317576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:45:28.301848  317576 kubeadm.go:310] 
	I1209 23:45:28.301939  317576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:45:28.301949  317576 kubeadm.go:310] 
	I1209 23:45:28.301985  317576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:45:28.302112  317576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:45:28.302195  317576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:45:28.302208  317576 kubeadm.go:310] 
	I1209 23:45:28.302277  317576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:45:28.302288  317576 kubeadm.go:310] 
	I1209 23:45:28.302364  317576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:45:28.302377  317576 kubeadm.go:310] 
	I1209 23:45:28.302457  317576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:45:28.302575  317576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:45:28.302692  317576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:45:28.302704  317576 kubeadm.go:310] 
	I1209 23:45:28.302832  317576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:45:28.302940  317576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:45:28.302950  317576 kubeadm.go:310] 
	I1209 23:45:28.303074  317576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t0i6y5.lu9zk5y4besjkjjw \
	I1209 23:45:28.303232  317576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:583fd60292befb6d5ed15808dfd50d9d48e9bd0967c27b18f18e21a787f66432 \
	I1209 23:45:28.303270  317576 kubeadm.go:310] 	--control-plane 
	I1209 23:45:28.303281  317576 kubeadm.go:310] 
	I1209 23:45:28.303425  317576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:45:28.303436  317576 kubeadm.go:310] 
	I1209 23:45:28.303513  317576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t0i6y5.lu9zk5y4besjkjjw \
	I1209 23:45:28.303669  317576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:583fd60292befb6d5ed15808dfd50d9d48e9bd0967c27b18f18e21a787f66432 
	I1209 23:45:28.305441  317576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:45:28.305589  317576 cni.go:84] Creating CNI manager for ""
	I1209 23:45:28.305611  317576 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 23:45:28.307512  317576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:45:28.309037  317576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:45:28.321001  317576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:45:28.344225  317576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:45:28.344321  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:28.344365  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-722117 minikube.k8s.io/updated_at=2024_12_09T23_45_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=addons-722117 minikube.k8s.io/primary=true
	I1209 23:45:28.357621  317576 ops.go:34] apiserver oom_adj: -16
	I1209 23:45:28.469382  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:28.969632  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:29.469476  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:29.970384  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:30.469565  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:30.970200  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:31.469814  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:31.969542  317576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:45:32.056318  317576 kubeadm.go:1113] duration metric: took 3.71207519s to wait for elevateKubeSystemPrivileges
	I1209 23:45:32.056357  317576 kubeadm.go:394] duration metric: took 13.722490367s to StartCluster
	I1209 23:45:32.056378  317576 settings.go:142] acquiring lock: {Name:mke2137612bef65f16122c7f8145a256c627c62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:32.056499  317576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:45:32.056962  317576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/kubeconfig: {Name:mkece29445b2d34680b7e32e9b4a017e5e096a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:45:32.057196  317576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:45:32.057195  317576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 23:45:32.057228  317576 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 23:45:32.057443  317576 config.go:182] Loaded profile config "addons-722117": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:45:32.057465  317576 addons.go:69] Setting yakd=true in profile "addons-722117"
	I1209 23:45:32.057484  317576 addons.go:234] Setting addon yakd=true in "addons-722117"
	I1209 23:45:32.057480  317576 addons.go:69] Setting default-storageclass=true in profile "addons-722117"
	I1209 23:45:32.057497  317576 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-722117"
	I1209 23:45:32.057508  317576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-722117"
	I1209 23:45:32.057514  317576 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-722117"
	I1209 23:45:32.057519  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.057535  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.057655  317576 addons.go:69] Setting gcp-auth=true in profile "addons-722117"
	I1209 23:45:32.057679  317576 mustload.go:65] Loading cluster: addons-722117
	I1209 23:45:32.057833  317576 config.go:182] Loaded profile config "addons-722117": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:45:32.057979  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.057991  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.058081  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058177  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.058192  317576 addons.go:69] Setting ingress=true in profile "addons-722117"
	I1209 23:45:32.058209  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058219  317576 addons.go:69] Setting ingress-dns=true in profile "addons-722117"
	I1209 23:45:32.058230  317576 addons.go:234] Setting addon ingress-dns=true in "addons-722117"
	I1209 23:45:32.058271  317576 addons.go:69] Setting inspektor-gadget=true in profile "addons-722117"
	I1209 23:45:32.058277  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.058280  317576 addons.go:234] Setting addon inspektor-gadget=true in "addons-722117"
	I1209 23:45:32.058297  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.058299  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058171  317576 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-722117"
	I1209 23:45:32.058495  317576 addons.go:69] Setting volumesnapshots=true in profile "addons-722117"
	I1209 23:45:32.058502  317576 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-722117"
	I1209 23:45:32.058505  317576 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-722117"
	I1209 23:45:32.058511  317576 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-722117"
	I1209 23:45:32.058533  317576 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-722117"
	I1209 23:45:32.058606  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058640  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.058612  317576 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-722117"
	I1209 23:45:32.058490  317576 addons.go:69] Setting volcano=true in profile "addons-722117"
	I1209 23:45:32.058722  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.058748  317576 addons.go:234] Setting addon volcano=true in "addons-722117"
	I1209 23:45:32.058822  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.058862  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.058891  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058999  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.058210  317576 addons.go:234] Setting addon ingress=true in "addons-722117"
	I1209 23:45:32.059032  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.059063  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.059094  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058474  317576 addons.go:69] Setting storage-provisioner=true in profile "addons-722117"
	I1209 23:45:32.059117  317576 addons.go:234] Setting addon storage-provisioner=true in "addons-722117"
	I1209 23:45:32.058484  317576 addons.go:69] Setting registry=true in profile "addons-722117"
	I1209 23:45:32.059154  317576 addons.go:234] Setting addon registry=true in "addons-722117"
	I1209 23:45:32.059182  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.059182  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.059218  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.059247  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.059278  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.059287  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.058514  317576 addons.go:234] Setting addon volumesnapshots=true in "addons-722117"
	I1209 23:45:32.058455  317576 addons.go:69] Setting metrics-server=true in profile "addons-722117"
	I1209 23:45:32.059345  317576 addons.go:234] Setting addon metrics-server=true in "addons-722117"
	I1209 23:45:32.057489  317576 addons.go:69] Setting cloud-spanner=true in profile "addons-722117"
	I1209 23:45:32.059542  317576 addons.go:234] Setting addon cloud-spanner=true in "addons-722117"
	I1209 23:45:32.059630  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.059646  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.059669  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.059716  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.059745  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.059772  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.059834  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.060227  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.060268  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.060515  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.060629  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.060730  317576 out.go:177] * Verifying Kubernetes components...
	I1209 23:45:32.060794  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.060798  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.062564  317576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:45:32.079697  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
	I1209 23:45:32.083360  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I1209 23:45:32.083626  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.083677  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.083726  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I1209 23:45:32.083756  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.083807  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.083732  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1209 23:45:32.084017  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.084187  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.084211  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.084402  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.084424  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.084464  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.084941  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35657
	I1209 23:45:32.085089  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I1209 23:45:32.085363  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.085378  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.085515  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.085526  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.085659  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.085670  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.085782  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.085793  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.085852  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.085905  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.085972  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.086039  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.086094  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.086144  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.086405  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.086570  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.086586  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.086652  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.086851  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.086868  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.087457  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.087498  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.090091  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.090227  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.090670  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.090738  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.091144  317576 addons.go:234] Setting addon default-storageclass=true in "addons-722117"
	I1209 23:45:32.091221  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.091589  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.091643  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.091945  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.092145  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.092324  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.092366  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.092731  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.092764  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.103400  317576 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-722117"
	I1209 23:45:32.103464  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:32.103837  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.103889  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.124949  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I1209 23:45:32.125541  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.126288  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.126313  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.126531  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38271
	I1209 23:45:32.126740  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.127221  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.127710  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.127748  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.128526  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.128547  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.128615  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I1209 23:45:32.129217  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.129306  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.129876  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.129920  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.130236  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.130252  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.130644  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.131263  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.131300  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.133299  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1209 23:45:32.133976  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.135888  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I1209 23:45:32.136160  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1209 23:45:32.136486  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.137034  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.137120  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.137138  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.137652  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.137673  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.137690  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.138077  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.138079  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.138721  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.138769  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.139199  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I1209 23:45:32.139326  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.139353  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.139778  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.139858  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.140582  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.140632  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.140929  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.140950  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.141120  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I1209 23:45:32.141456  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.142004  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.142044  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.142165  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.142686  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.143308  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.143336  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.143729  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.144240  317576 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:45:32.144335  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.144371  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.145818  317576 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:45:32.145838  317576 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:45:32.145861  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.146880  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I1209 23:45:32.147469  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.148041  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.148071  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.148442  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.149020  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.149058  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.149291  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I1209 23:45:32.149337  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1209 23:45:32.150050  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.150600  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.150619  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.151060  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.151282  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.153125  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.153221  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.153243  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.153259  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.153285  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.153343  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I1209 23:45:32.153505  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.153668  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.153752  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.154087  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.154108  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.154171  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.154527  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.154542  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.154887  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.154937  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.156015  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.156043  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.156057  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.156077  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.157324  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1209 23:45:32.157755  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.158501  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.158527  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.159018  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.159638  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.159680  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.165348  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I1209 23:45:32.175327  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I1209 23:45:32.175856  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.175858  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.176443  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.176496  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.176575  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.176593  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.176934  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.177003  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.177649  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:32.177695  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:32.177952  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1209 23:45:32.178310  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.179992  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.180260  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.180828  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.180850  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.181248  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.181309  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1209 23:45:32.181931  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.182006  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.182498  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I1209 23:45:32.182740  317576 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:45:32.183250  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.183839  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.183857  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.184050  317576 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:45:32.184079  317576 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:45:32.184103  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.184113  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.184127  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.184175  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
	I1209 23:45:32.184350  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.184578  317576 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:45:32.184596  317576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:45:32.184613  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.184688  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.185264  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.185284  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.185347  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.185387  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.185813  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.185872  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.186087  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.186824  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.188268  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.188692  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.188711  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.188800  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.188962  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.189039  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I1209 23:45:32.189465  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.189793  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.189899  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.190202  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.190473  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.190495  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.190545  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.190753  317576 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:45:32.190918  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.191280  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.191305  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.191522  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.191805  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.192127  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.192185  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.192193  317576 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:45:32.192381  317576 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:45:32.192399  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:45:32.192399  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.192417  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.193133  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.193208  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I1209 23:45:32.193697  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.193952  317576 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:45:32.193968  317576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:45:32.193996  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.194236  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.194261  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.194583  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.194722  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.196713  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I1209 23:45:32.197187  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.197732  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.197755  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.198256  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.198566  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.198807  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.198874  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.198929  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.198946  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.199442  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.199566  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.199665  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.199910  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.200149  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.200178  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.200200  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.200334  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.200359  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.200529  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.200592  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.200639  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.200804  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.200933  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.201382  317576 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:45:32.201421  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:45:32.202468  317576 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:45:32.202490  317576 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:45:32.204060  317576 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:45:32.204081  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:45:32.204101  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.204209  317576 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:45:32.204399  317576 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:45:32.204413  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:45:32.204429  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.205530  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:45:32.206293  317576 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:45:32.206312  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:45:32.206331  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.208983  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:45:32.209227  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.209659  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.209684  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.209860  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.210097  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.210293  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.210470  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.210502  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.211588  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.211642  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.211706  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.211720  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.211866  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.211885  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.211911  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.212110  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.212157  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.212272  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.212372  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.212732  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.212917  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:45:32.213091  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.214563  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1209 23:45:32.214847  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1209 23:45:32.215006  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.215636  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.215996  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:45:32.216524  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.216545  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.217128  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1209 23:45:32.217262  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I1209 23:45:32.217483  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.217730  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.217825  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.218279  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.218306  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.218375  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.218501  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.218520  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.218643  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.218657  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.218665  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:45:32.219056  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.219179  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.219283  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.219415  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.220401  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.220645  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.221036  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:45:32.221289  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.222304  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.222382  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.223284  317576 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:45:32.223337  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.224076  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:45:32.224096  317576 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:45:32.224079  317576 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1209 23:45:32.224358  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I1209 23:45:32.224963  317576 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:45:32.225319  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:45:32.225346  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.225042  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I1209 23:45:32.225600  317576 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:45:32.225996  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:45:32.226008  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:45:32.226022  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.225868  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.225876  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:32.226569  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.226795  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.226655  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:32.226832  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:32.227082  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:45:32.227103  317576 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:45:32.227123  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.227244  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.227333  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:32.227652  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.227852  317576 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1209 23:45:32.227914  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:32.228826  317576 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:45:32.230406  317576 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1209 23:45:32.230572  317576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:45:32.230590  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:45:32.230602  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.230611  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.231445  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:32.231906  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.232825  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.233101  317576 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1209 23:45:32.233117  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1209 23:45:32.233120  317576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:45:32.233140  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.233228  317576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:45:32.233622  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.233958  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.234000  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.234236  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.234423  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.234654  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.234680  317576 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:45:32.234696  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:45:32.234714  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.234899  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.235264  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.235283  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.235536  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.235567  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.235747  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.235966  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.236218  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.236304  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.236479  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.236567  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.236649  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.236707  317576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:45:32.237365  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.237367  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.237379  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.237823  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.237846  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.237875  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.237888  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.238136  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.238191  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.238342  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.238330  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.238494  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.238564  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.238626  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	W1209 23:45:32.239315  317576 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43416->192.168.39.28:22: read: connection reset by peer
	I1209 23:45:32.239345  317576 retry.go:31] will retry after 213.037542ms: ssh: handshake failed: read tcp 192.168.39.1:43416->192.168.39.28:22: read: connection reset by peer
	I1209 23:45:32.239423  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.238677  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.239484  317576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:45:32.240086  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.240106  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.240335  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.240486  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.240610  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.240713  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.240839  317576 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:45:32.240854  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:45:32.240867  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:32.243542  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.243873  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:32.243899  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:32.244047  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:32.244228  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:32.244382  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:32.244530  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:32.618803  317576 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:45:32.618843  317576 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:45:32.697701  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:45:32.709749  317576 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:45:32.709778  317576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:45:32.710702  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:45:32.710730  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:45:32.747923  317576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:45:32.747964  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:45:32.755284  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:45:32.809261  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:45:32.818947  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:45:32.827770  317576 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:45:32.827805  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:45:32.829996  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:45:32.875826  317576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:45:32.875845  317576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 23:45:32.897336  317576 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:45:32.897369  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:45:32.908187  317576 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:45:32.908228  317576 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:45:32.921978  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:45:32.978402  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:45:33.016765  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:45:33.016803  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:45:33.081198  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:45:33.099509  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1209 23:45:33.111250  317576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:45:33.111283  317576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:45:33.129494  317576 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:45:33.129536  317576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:45:33.258646  317576 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:45:33.258688  317576 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:45:33.394430  317576 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:45:33.394465  317576 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:45:33.451535  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:45:33.561833  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:45:33.678267  317576 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:45:33.678300  317576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:45:33.740008  317576 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:45:33.740039  317576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:45:33.767201  317576 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:45:33.767230  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:45:33.851772  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:45:33.851804  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:45:34.430154  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:45:34.430191  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:45:34.445291  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:45:34.445321  317576 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:45:34.558684  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:45:34.587503  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:45:34.759905  317576 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:45:34.759930  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:45:34.763651  317576 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:45:34.763678  317576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:45:34.912510  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.214759931s)
	I1209 23:45:34.912591  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:34.912608  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:34.912962  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:34.912962  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:34.912991  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:34.913002  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:34.913010  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:34.913426  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:34.913463  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:34.913472  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:35.120820  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:45:35.126288  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:45:35.126318  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:45:35.638684  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:45:35.638716  317576 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:45:36.116478  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:45:36.116516  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:45:36.400597  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:45:36.400624  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:45:36.441822  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.686488732s)
	I1209 23:45:36.441878  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:36.441887  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:36.441922  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.632612284s)
	I1209 23:45:36.441989  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:36.442006  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:36.442240  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:36.442286  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:36.442307  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:36.442312  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:36.442322  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:36.442330  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:36.442337  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:36.442339  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:36.442361  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:36.442377  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:36.442634  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:36.442693  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:36.442695  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:36.442707  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:36.442721  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:36.463109  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:36.463141  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:36.463551  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:36.463571  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:36.719120  317576 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:45:36.719156  317576 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:45:37.264979  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:45:37.458948  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.639955789s)
	I1209 23:45:37.459024  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:37.459064  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:37.459475  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:37.459496  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:37.459506  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:37.459502  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:37.459514  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:37.459791  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:37.459812  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.517126  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.687089253s)
	I1209 23:45:38.517192  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.517206  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.517210  317576 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.641335685s)
	I1209 23:45:38.517314  317576 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.641438412s)
	I1209 23:45:38.517338  317576 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 23:45:38.517375  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.595365758s)
	I1209 23:45:38.517423  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.517448  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.517466  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.539028388s)
	I1209 23:45:38.517508  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.517522  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.518359  317576 node_ready.go:35] waiting up to 6m0s for node "addons-722117" to be "Ready" ...
	I1209 23:45:38.519225  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:38.519237  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.519254  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.519257  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:38.519264  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.519272  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.519276  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.519282  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.519284  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.519291  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.519295  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.519298  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.519306  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.519483  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.519497  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.519589  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:38.519602  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.519612  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.520879  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.521150  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.521183  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:38.521149  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:38.567669  317576 node_ready.go:49] node "addons-722117" has status "Ready":"True"
	I1209 23:45:38.567694  317576 node_ready.go:38] duration metric: took 49.309101ms for node "addons-722117" to be "Ready" ...
	I1209 23:45:38.567704  317576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:45:38.649032  317576 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:38.664133  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:38.664160  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:38.664606  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:38.664636  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:39.062465  317576 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-722117" context rescaled to 1 replicas
	I1209 23:45:39.212284  317576 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:45:39.212336  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:39.215413  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:39.215940  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:39.215974  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:39.216125  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:39.216358  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:39.216553  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:39.216720  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:39.800860  317576 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:45:40.206705  317576 addons.go:234] Setting addon gcp-auth=true in "addons-722117"
	I1209 23:45:40.206789  317576 host.go:66] Checking if "addons-722117" exists ...
	I1209 23:45:40.207262  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:40.207323  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:40.223571  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I1209 23:45:40.224026  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:40.224610  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:40.224643  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:40.225056  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:40.225697  317576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:45:40.225763  317576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:45:40.241078  317576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36337
	I1209 23:45:40.241658  317576 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:45:40.242292  317576 main.go:141] libmachine: Using API Version  1
	I1209 23:45:40.242323  317576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:45:40.242707  317576 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:45:40.242902  317576 main.go:141] libmachine: (addons-722117) Calling .GetState
	I1209 23:45:40.244577  317576 main.go:141] libmachine: (addons-722117) Calling .DriverName
	I1209 23:45:40.244825  317576 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:45:40.244849  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHHostname
	I1209 23:45:40.247576  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:40.248021  317576 main.go:141] libmachine: (addons-722117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:b8:62", ip: ""} in network mk-addons-722117: {Iface:virbr1 ExpiryTime:2024-12-10 00:45:00 +0000 UTC Type:0 Mac:52:54:00:fe:b8:62 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-722117 Clientid:01:52:54:00:fe:b8:62}
	I1209 23:45:40.248048  317576 main.go:141] libmachine: (addons-722117) DBG | domain addons-722117 has defined IP address 192.168.39.28 and MAC address 52:54:00:fe:b8:62 in network mk-addons-722117
	I1209 23:45:40.248213  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHPort
	I1209 23:45:40.248456  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHKeyPath
	I1209 23:45:40.248647  317576 main.go:141] libmachine: (addons-722117) Calling .GetSSHUsername
	I1209 23:45:40.248801  317576 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/addons-722117/id_rsa Username:docker}
	I1209 23:45:40.655313  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:42.723221  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:42.764237  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.682976747s)
	I1209 23:45:42.764304  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:42.764316  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:42.764623  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:42.764642  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:42.764652  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:42.764659  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:42.765015  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:42.765054  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:42.765073  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:42.765099  317576 addons.go:475] Verifying addon ingress=true in "addons-722117"
	I1209 23:45:42.766757  317576 out.go:177] * Verifying ingress addon...
	I1209 23:45:42.768617  317576 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:45:42.774348  317576 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:45:42.774368  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.324402  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.852040  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:44.310845  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:44.781543  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:44.885212  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:45.321773  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:45.737340  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (12.285762314s)
	I1209 23:45:45.737408  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.737410  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.175544003s)
	I1209 23:45:45.737454  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.737417  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.737507  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.178790902s)
	I1209 23:45:45.737529  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.737349  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.637791706s)
	I1209 23:45:45.737469  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.737568  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.737580  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.737589  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.150053041s)
	I1209 23:45:45.737602  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.737610  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.737540  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.737731  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.616868201s)
	W1209 23:45:45.737761  317576 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:45:45.737786  317576 retry.go:31] will retry after 195.193359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:45:45.739951  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.739959  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.739959  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.739971  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.739975  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.739976  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.739980  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.739985  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.739988  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.739993  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.740001  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.740030  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740037  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.740045  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.740052  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.740053  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740061  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.740068  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.740215  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.740138  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740255  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740273  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.740285  317576 addons.go:475] Verifying addon metrics-server=true in "addons-722117"
	I1209 23:45:45.740257  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.740308  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:45.740316  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:45.740337  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.740177  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.740362  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740369  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.740378  317576 addons.go:475] Verifying addon registry=true in "addons-722117"
	I1209 23:45:45.740161  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.740604  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.740639  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.740656  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.741380  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.741419  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:45.741406  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.741474  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.741430  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:45.741545  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:45.742144  317576 out.go:177] * Verifying registry addon...
	I1209 23:45:45.743092  317576 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-722117 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:45:45.744675  317576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:45:45.780970  317576 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:45:45.781007  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:45.883135  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:45.933681  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:45:46.295056  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:46.357470  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:46.514598  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.249565237s)
	I1209 23:45:46.514667  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:46.514686  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:46.514694  317576 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.269841888s)
	I1209 23:45:46.515059  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:46.515077  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:46.515091  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:46.515094  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:46.515148  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:46.515406  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:46.515426  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:46.515438  317576 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-722117"
	I1209 23:45:46.516131  317576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:45:46.517081  317576 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:45:46.518921  317576 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:45:46.519665  317576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:45:46.520146  317576 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:45:46.520163  317576 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:45:46.542936  317576 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:45:46.542963  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:46.609833  317576 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:45:46.609860  317576 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:45:46.710700  317576 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:45:46.710731  317576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:45:46.765534  317576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:45:46.784805  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:46.820036  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:47.025790  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:47.177218  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:47.248929  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:47.273409  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:47.525043  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:47.652328  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.718572056s)
	I1209 23:45:47.652404  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:47.652424  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:47.652728  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:47.652746  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:47.652757  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:47.652765  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:47.652995  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:47.653030  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:47.749093  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:47.773555  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:48.032629  317576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.267050228s)
	I1209 23:45:48.032689  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:48.032702  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:48.033093  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:48.033118  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:48.033130  317576 main.go:141] libmachine: Making call to close driver server
	I1209 23:45:48.033138  317576 main.go:141] libmachine: (addons-722117) Calling .Close
	I1209 23:45:48.033156  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:48.033445  317576 main.go:141] libmachine: (addons-722117) DBG | Closing plugin on server side
	I1209 23:45:48.033511  317576 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:45:48.033525  317576 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:45:48.034480  317576 addons.go:475] Verifying addon gcp-auth=true in "addons-722117"
	I1209 23:45:48.036071  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:48.036133  317576 out.go:177] * Verifying gcp-auth addon...
	I1209 23:45:48.039231  317576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:45:48.134208  317576 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:45:48.249898  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:48.274486  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:48.524150  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:48.751338  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:48.772465  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:49.025051  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:49.248790  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:49.272947  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:49.525534  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:49.657371  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:49.947112  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:49.947422  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:50.025522  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:50.248797  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:50.274858  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:50.525335  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:50.748838  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:50.773119  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:51.025058  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:51.269055  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:51.273101  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:51.524828  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:51.658358  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:51.748735  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:51.773838  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:52.025105  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:52.489107  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:52.489553  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:52.527247  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:52.751620  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:52.773875  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:53.024487  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:53.248502  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:53.274318  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:53.526235  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:53.749219  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:53.773516  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:54.024936  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:54.155705  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:54.249709  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:54.273721  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:54.525672  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:54.748393  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:54.772788  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:55.025112  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:55.249050  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:55.273685  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:55.525280  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:55.749999  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:55.773571  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:56.024504  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:56.248973  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:56.272937  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:56.524777  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:56.655981  317576 pod_ready.go:103] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:56.749954  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:56.772846  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:57.024544  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:57.249517  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:57.273870  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:57.526437  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:57.749548  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:57.772936  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:58.024699  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:58.249953  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:58.277253  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:58.525513  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:58.657321  317576 pod_ready.go:93] pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:58.657346  317576 pod_ready.go:82] duration metric: took 20.008285147s for pod "amd-gpu-device-plugin-j9jp7" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.657356  317576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p6bzg" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.659226  317576 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-p6bzg" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-p6bzg" not found
	I1209 23:45:58.659247  317576 pod_ready.go:82] duration metric: took 1.88498ms for pod "coredns-7c65d6cfc9-p6bzg" in "kube-system" namespace to be "Ready" ...
	E1209 23:45:58.659258  317576 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-p6bzg" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-p6bzg" not found
	I1209 23:45:58.659264  317576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w9cwn" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.663621  317576 pod_ready.go:93] pod "coredns-7c65d6cfc9-w9cwn" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:58.663640  317576 pod_ready.go:82] duration metric: took 4.37048ms for pod "coredns-7c65d6cfc9-w9cwn" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.663652  317576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.667864  317576 pod_ready.go:93] pod "etcd-addons-722117" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:58.667881  317576 pod_ready.go:82] duration metric: took 4.223319ms for pod "etcd-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.667889  317576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.672261  317576 pod_ready.go:93] pod "kube-apiserver-addons-722117" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:58.672278  317576 pod_ready.go:82] duration metric: took 4.383108ms for pod "kube-apiserver-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.672287  317576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.749305  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:58.774070  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:58.853392  317576 pod_ready.go:93] pod "kube-controller-manager-addons-722117" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:58.853417  317576 pod_ready.go:82] duration metric: took 181.123788ms for pod "kube-controller-manager-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:58.853430  317576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vf896" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:59.025271  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:59.248663  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:59.252942  317576 pod_ready.go:93] pod "kube-proxy-vf896" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:59.252970  317576 pod_ready.go:82] duration metric: took 399.533173ms for pod "kube-proxy-vf896" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:59.252981  317576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:59.273374  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:59.525465  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:59.654027  317576 pod_ready.go:93] pod "kube-scheduler-addons-722117" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:59.654055  317576 pod_ready.go:82] duration metric: took 401.057225ms for pod "kube-scheduler-addons-722117" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:59.654065  317576 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gm9q9" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:59.749193  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:59.772615  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:00.025015  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:00.054075  317576 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gm9q9" in "kube-system" namespace has status "Ready":"True"
	I1209 23:46:00.054100  317576 pod_ready.go:82] duration metric: took 400.027879ms for pod "nvidia-device-plugin-daemonset-gm9q9" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:00.054117  317576 pod_ready.go:39] duration metric: took 21.486402147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:46:00.054135  317576 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:46:00.054189  317576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:46:00.072112  317576 api_server.go:72] duration metric: took 28.014784397s to wait for apiserver process to appear ...
	I1209 23:46:00.072149  317576 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:46:00.072184  317576 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1209 23:46:00.077285  317576 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I1209 23:46:00.078238  317576 api_server.go:141] control plane version: v1.31.2
	I1209 23:46:00.078261  317576 api_server.go:131] duration metric: took 6.105421ms to wait for apiserver health ...
	I1209 23:46:00.078270  317576 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:46:00.249859  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:00.261249  317576 system_pods.go:59] 18 kube-system pods found
	I1209 23:46:00.261283  317576 system_pods.go:61] "amd-gpu-device-plugin-j9jp7" [d647263d-1626-4827-be6b-1b3c8d1e6f45] Running
	I1209 23:46:00.261288  317576 system_pods.go:61] "coredns-7c65d6cfc9-w9cwn" [a8090cda-46ff-4f0d-98a9-b797dddf77b0] Running
	I1209 23:46:00.261295  317576 system_pods.go:61] "csi-hostpath-attacher-0" [a8980358-4625-466a-a24b-a8b79316d2e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:46:00.261300  317576 system_pods.go:61] "csi-hostpath-resizer-0" [8c4f526e-b63a-44b2-a79c-a38ac83935a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:46:00.261309  317576 system_pods.go:61] "csi-hostpathplugin-ts2kp" [a83914c2-a012-47f8-91cf-6d6f3014abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:46:00.261313  317576 system_pods.go:61] "etcd-addons-722117" [85735437-a94e-4d30-9152-ef29118be11d] Running
	I1209 23:46:00.261318  317576 system_pods.go:61] "kube-apiserver-addons-722117" [4162e4c9-87e2-4aaa-a24a-cce943e23314] Running
	I1209 23:46:00.261321  317576 system_pods.go:61] "kube-controller-manager-addons-722117" [4761331e-d786-4ab0-b4ac-d202615e22b4] Running
	I1209 23:46:00.261327  317576 system_pods.go:61] "kube-ingress-dns-minikube" [8f275f30-9507-437e-a59a-8028bb5cce1e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 23:46:00.261331  317576 system_pods.go:61] "kube-proxy-vf896" [76916604-287f-412d-b205-5733f7169da6] Running
	I1209 23:46:00.261335  317576 system_pods.go:61] "kube-scheduler-addons-722117" [768c5fd5-597d-4cff-8a3e-e9de92149d92] Running
	I1209 23:46:00.261339  317576 system_pods.go:61] "metrics-server-84c5f94fbc-6xkj4" [c1cd6238-412b-4017-974f-9c361334dfc5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:46:00.261348  317576 system_pods.go:61] "nvidia-device-plugin-daemonset-gm9q9" [c1b6b975-b701-4a2f-ae36-440e4446d946] Running
	I1209 23:46:00.261358  317576 system_pods.go:61] "registry-5cc95cd69-89n6d" [856777db-c8d8-4f9f-b52e-05d4d38090b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 23:46:00.261362  317576 system_pods.go:61] "registry-proxy-qjd4j" [02bc21df-ac00-4c3c-a980-e033abcac8f0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:46:00.261369  317576 system_pods.go:61] "snapshot-controller-56fcc65765-cm7q9" [6cce5c12-613a-4eca-af77-27777a666353] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:46:00.261377  317576 system_pods.go:61] "snapshot-controller-56fcc65765-twpq9" [3939ef0d-dbbc-43f2-bbd3-5161adaddce9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:46:00.261382  317576 system_pods.go:61] "storage-provisioner" [ee31f14f-b765-420b-80f4-90ef31a97236] Running
	I1209 23:46:00.261388  317576 system_pods.go:74] duration metric: took 183.11212ms to wait for pod list to return data ...
	I1209 23:46:00.261396  317576 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:46:00.274947  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:00.454198  317576 default_sa.go:45] found service account: "default"
	I1209 23:46:00.454230  317576 default_sa.go:55] duration metric: took 192.827528ms for default service account to be created ...
	I1209 23:46:00.454240  317576 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:46:00.524690  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:00.658637  317576 system_pods.go:86] 18 kube-system pods found
	I1209 23:46:00.658671  317576 system_pods.go:89] "amd-gpu-device-plugin-j9jp7" [d647263d-1626-4827-be6b-1b3c8d1e6f45] Running
	I1209 23:46:00.658679  317576 system_pods.go:89] "coredns-7c65d6cfc9-w9cwn" [a8090cda-46ff-4f0d-98a9-b797dddf77b0] Running
	I1209 23:46:00.658686  317576 system_pods.go:89] "csi-hostpath-attacher-0" [a8980358-4625-466a-a24b-a8b79316d2e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:46:00.658692  317576 system_pods.go:89] "csi-hostpath-resizer-0" [8c4f526e-b63a-44b2-a79c-a38ac83935a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:46:00.658699  317576 system_pods.go:89] "csi-hostpathplugin-ts2kp" [a83914c2-a012-47f8-91cf-6d6f3014abf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:46:00.658703  317576 system_pods.go:89] "etcd-addons-722117" [85735437-a94e-4d30-9152-ef29118be11d] Running
	I1209 23:46:00.658707  317576 system_pods.go:89] "kube-apiserver-addons-722117" [4162e4c9-87e2-4aaa-a24a-cce943e23314] Running
	I1209 23:46:00.658711  317576 system_pods.go:89] "kube-controller-manager-addons-722117" [4761331e-d786-4ab0-b4ac-d202615e22b4] Running
	I1209 23:46:00.658716  317576 system_pods.go:89] "kube-ingress-dns-minikube" [8f275f30-9507-437e-a59a-8028bb5cce1e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 23:46:00.658719  317576 system_pods.go:89] "kube-proxy-vf896" [76916604-287f-412d-b205-5733f7169da6] Running
	I1209 23:46:00.658723  317576 system_pods.go:89] "kube-scheduler-addons-722117" [768c5fd5-597d-4cff-8a3e-e9de92149d92] Running
	I1209 23:46:00.658728  317576 system_pods.go:89] "metrics-server-84c5f94fbc-6xkj4" [c1cd6238-412b-4017-974f-9c361334dfc5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:46:00.658732  317576 system_pods.go:89] "nvidia-device-plugin-daemonset-gm9q9" [c1b6b975-b701-4a2f-ae36-440e4446d946] Running
	I1209 23:46:00.658736  317576 system_pods.go:89] "registry-5cc95cd69-89n6d" [856777db-c8d8-4f9f-b52e-05d4d38090b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 23:46:00.658741  317576 system_pods.go:89] "registry-proxy-qjd4j" [02bc21df-ac00-4c3c-a980-e033abcac8f0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:46:00.658747  317576 system_pods.go:89] "snapshot-controller-56fcc65765-cm7q9" [6cce5c12-613a-4eca-af77-27777a666353] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:46:00.658755  317576 system_pods.go:89] "snapshot-controller-56fcc65765-twpq9" [3939ef0d-dbbc-43f2-bbd3-5161adaddce9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:46:00.658758  317576 system_pods.go:89] "storage-provisioner" [ee31f14f-b765-420b-80f4-90ef31a97236] Running
	I1209 23:46:00.658766  317576 system_pods.go:126] duration metric: took 204.519933ms to wait for k8s-apps to be running ...
	I1209 23:46:00.658778  317576 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:46:00.658827  317576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:46:00.674384  317576 system_svc.go:56] duration metric: took 15.595974ms WaitForService to wait for kubelet
	I1209 23:46:00.674419  317576 kubeadm.go:582] duration metric: took 28.61710044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:46:00.674440  317576 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:46:00.749277  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:00.772897  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:00.854554  317576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:46:00.854584  317576 node_conditions.go:123] node cpu capacity is 2
	I1209 23:46:00.854613  317576 node_conditions.go:105] duration metric: took 180.167085ms to run NodePressure ...
	I1209 23:46:00.854626  317576 start.go:241] waiting for startup goroutines ...
	I1209 23:46:01.025319  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:01.248376  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:01.272497  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:01.525702  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:01.748979  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:01.774902  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:02.025740  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:02.248378  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:02.273337  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:02.526840  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:02.749106  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:02.773586  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:03.024933  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:03.249388  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:03.272873  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:03.527363  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:03.751624  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:03.774065  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:04.025199  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:04.248567  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:04.273386  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:04.524178  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:04.748262  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:04.772531  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:05.024156  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:05.247740  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:05.273284  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:05.525468  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:05.749210  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:05.773413  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:06.024344  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:06.248894  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:06.287060  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:06.526208  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:06.749373  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:06.773332  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:07.024261  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:07.249464  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:07.273993  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:07.524695  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:07.751832  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:07.773266  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:08.024113  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:08.248321  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:08.273010  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:08.524878  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:08.749057  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:08.773493  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:09.024077  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:09.248621  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:09.272928  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:09.525209  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:09.748830  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:09.774241  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:10.024896  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:10.248769  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:10.273090  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:10.524341  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:10.748871  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:10.773960  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:11.024623  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:11.249114  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:11.273491  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:11.525532  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:11.749127  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:11.773925  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:12.023971  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:12.398336  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:12.398535  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:12.524392  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:12.748864  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:12.774271  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:13.024360  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:13.248618  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:13.273281  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:13.526142  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:13.749444  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:13.772317  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:14.023923  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:14.248293  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:14.272744  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:14.525248  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:14.749867  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:14.774093  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:15.025030  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:15.248893  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:15.275045  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:15.525805  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:15.748774  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:15.774005  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:16.025625  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:16.249986  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:16.273371  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:16.523945  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:16.752383  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:16.773103  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:17.025483  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:17.248948  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:17.273316  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:17.524409  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:17.748899  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:17.773455  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:18.024676  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:18.250711  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:18.273902  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:18.525047  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:18.748274  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:18.772387  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:19.026760  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:19.248371  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:19.272464  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:19.525097  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:19.749258  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:19.773150  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:20.025724  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:20.248287  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:20.272700  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:20.524956  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:20.748423  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:20.773254  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:21.024437  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:21.248840  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:21.273271  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:21.524309  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:21.749500  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:21.772825  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:22.025907  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:22.248165  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:22.272123  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:22.523559  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:22.749398  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:22.773370  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:23.024142  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:23.249958  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:23.272938  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:23.526782  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:23.748893  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:23.773889  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:24.025385  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:24.248860  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:24.273274  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:24.524901  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:24.748464  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:24.772739  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:25.024436  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:25.248474  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:25.272842  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:25.525205  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:25.748853  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:25.773334  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:26.023976  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:26.248932  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:46:26.273225  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:26.524969  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:26.748492  317576 kapi.go:107] duration metric: took 41.003812425s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 23:46:26.777352  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:27.023836  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:27.272927  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:27.527390  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:27.773050  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:28.024654  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:28.273681  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:28.524066  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:28.775978  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:29.028159  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:29.272777  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:29.527669  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:29.773235  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:30.024042  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:30.273274  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:30.524469  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:30.772952  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:31.025405  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:31.273322  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:31.525328  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:31.775913  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:32.030696  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:32.273461  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:32.525239  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:32.774486  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:33.025937  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:33.273577  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:33.524482  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:33.772978  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:34.025562  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:34.275690  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:34.524161  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:34.774114  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:35.024810  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:35.272549  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:35.524845  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:35.774668  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:36.026547  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:36.272659  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:36.524733  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:36.774077  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:37.024753  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:37.272726  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:37.525016  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:37.773626  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:38.024679  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:38.272722  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:38.524281  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:38.773375  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:39.024372  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:39.274627  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:39.524217  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:39.773568  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:40.024218  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:40.273705  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:40.524747  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:40.773459  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:41.024528  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:41.274049  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:41.524340  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:41.773634  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:42.024054  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:42.272898  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:42.525313  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:42.773114  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:43.024843  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:46:43.273225  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:46:43.525362  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical ~250ms polling entries for both pods elided; "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" remained Pending from 23:46:43 through 23:47:48 ...]
	I1209 23:47:48.530557  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:47:48.772583  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:49.024355  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:47:49.274522  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:49.524552  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:47:49.774707  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:50.024675  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:47:50.277745  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:50.524890  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:47:50.778372  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:51.025984  317576 kapi.go:107] duration metric: took 2m4.50631253s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:47:51.273835  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:51.773539  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:52.273791  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:52.774148  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:53.274485  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:53.773505  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:54.273003  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:54.773146  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:55.274083  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:55.773485  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:56.273576  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:56.774212  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:57.273743  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:57.777760  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:58.273240  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:58.773340  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:59.272677  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:47:59.773393  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:00.273372  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:00.774103  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:01.273283  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:01.772992  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:02.273418  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:02.773009  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:03.274295  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:03.773407  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:04.273279  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:04.773724  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:05.273358  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:05.773381  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:06.273019  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:06.779506  317576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:48:07.273084  317576 kapi.go:107] duration metric: took 2m24.504461275s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:48:33.057706  317576 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:48:33.057744  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:33.542648  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:34.043969  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:34.543731  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:35.043593  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:35.542904  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:36.043844  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:36.543892  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:37.043963  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:37.544292  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:38.042980  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:38.543879  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:39.043671  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:39.543931  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:40.043516  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:40.543402  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:41.043124  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:41.543751  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:42.043464  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:42.542912  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:43.044668  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:43.543306  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:44.042758  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:44.544026  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:45.044024  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:45.543928  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:46.044536  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:46.542845  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:47.044194  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:47.544160  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:48.042666  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:48.543617  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:49.043330  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:49.543133  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:50.043696  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:50.543192  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:51.043111  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:51.542456  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:52.045582  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:52.543532  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:53.042897  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:53.544009  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:54.044488  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:54.543410  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:55.043177  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:55.551185  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:56.043448  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:56.543154  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:57.042878  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:57.544345  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:58.043376  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:58.543863  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:59.043701  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:48:59.543346  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:00.043499  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:00.544239  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:01.043222  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:01.543379  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:02.043116  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:02.542868  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:03.043397  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:03.544107  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:04.043953  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:04.543943  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:05.043558  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:05.543117  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:06.044305  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:06.542752  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:07.044147  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:07.544434  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:08.043235  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:08.544124  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:09.044228  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:09.544795  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:10.042921  317576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:49:10.543992  317576 kapi.go:107] duration metric: took 3m22.504757181s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:49:10.546098  317576 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-722117 cluster.
	I1209 23:49:10.547421  317576 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:49:10.548604  317576 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 23:49:10.550034  317576 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, volcano, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 23:49:10.551122  317576 addons.go:510] duration metric: took 3m38.49389899s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass ingress-dns amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server volcano inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1209 23:49:10.551171  317576 start.go:246] waiting for cluster config update ...
	I1209 23:49:10.551192  317576 start.go:255] writing updated cluster config ...
	I1209 23:49:10.551480  317576 ssh_runner.go:195] Run: rm -f paused
	I1209 23:49:10.607097  317576 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:49:10.609212  317576 out.go:177] * Done! kubectl is now configured to use "addons-722117" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	37792dede0a57       56cc512116c8f       About a minute ago   Running             busybox                                  0                   51cd981a1080e       busybox
	49351ca64129e       ee44bc2368033       3 minutes ago        Running             controller                               0                   19b73b96be0ad       ingress-nginx-controller-5f85ff4588-xqhhc
	2bce60a4a60b2       738351fd438f0       3 minutes ago        Running             csi-snapshotter                          0                   f916266f920fa       csi-hostpathplugin-ts2kp
	cdfd7e0845f25       931dbfd16f87c       3 minutes ago        Running             csi-provisioner                          0                   f916266f920fa       csi-hostpathplugin-ts2kp
	9869ce2190a61       e899260153aed       3 minutes ago        Running             liveness-probe                           0                   f916266f920fa       csi-hostpathplugin-ts2kp
	dd4649bb6f53b       e255e073c508c       3 minutes ago        Running             hostpath                                 0                   f916266f920fa       csi-hostpathplugin-ts2kp
	95abc6a6a6357       88ef14a257f42       3 minutes ago        Running             node-driver-registrar                    0                   f916266f920fa       csi-hostpathplugin-ts2kp
	e95471259850a       59cbb42146a37       3 minutes ago        Running             csi-attacher                             0                   df96c230a3569       csi-hostpath-attacher-0
	40141f1a00fc8       19a639eda60f0       3 minutes ago        Running             csi-resizer                              0                   04f704c77c0b7       csi-hostpath-resizer-0
	740a59e17e6c2       a62eeff05ba51       3 minutes ago        Exited              patch                                    2                   7ff796d552f24       ingress-nginx-admission-patch-7c6qq
	8b51616b68466       a1ed5895ba635       3 minutes ago        Running             csi-external-health-monitor-controller   0                   f916266f920fa       csi-hostpathplugin-ts2kp
	b4e40831f40bb       a62eeff05ba51       3 minutes ago        Exited              create                                   0                   e199a4f41dd5f       ingress-nginx-admission-create-gjcwh
	7316c00c45499       aa61ee9c70bc4       4 minutes ago        Running             volume-snapshot-controller               0                   32c201f78e62a       snapshot-controller-56fcc65765-twpq9
	497ca607054bc       aa61ee9c70bc4       4 minutes ago        Running             volume-snapshot-controller               0                   e60f5fbf72840       snapshot-controller-56fcc65765-cm7q9
	7954209eae185       71a13057a4a46       4 minutes ago        Running             gadget                                   0                   499df415df508       gadget-x8nnc
	8b9747f01b1e8       e16d1e3a10667       5 minutes ago        Running             local-path-provisioner                   0                   9c27fa391acf7       local-path-provisioner-86d989889c-tvvgq
	8e452125dbe81       9df718d81010e       5 minutes ago        Running             registry-proxy                           0                   5c1ff905be1c8       registry-proxy-qjd4j
	f59c90eda6b41       c18a86d35e983       5 minutes ago        Running             registry                                 0                   2c4beff4bfc33       registry-5cc95cd69-89n6d
	74a89d2461f8e       30dd67412fdea       5 minutes ago        Running             minikube-ingress-dns                     0                   88a90d3f299d2       kube-ingress-dns-minikube
	cc7caa99868e8       d5e667c0f2bb6       5 minutes ago        Running             amd-gpu-device-plugin                    0                   383599d5a3c4f       amd-gpu-device-plugin-j9jp7
	a35a729e68cff       6e38f40d628db       5 minutes ago        Running             storage-provisioner                      0                   9482b0c4511f5       storage-provisioner
	5c87e29dc4ada       c69fa2e9cbf5f       5 minutes ago        Running             coredns                                  0                   2c4b50f90f04b       coredns-7c65d6cfc9-w9cwn
	7d1d26993435e       505d571f5fd56       5 minutes ago        Running             kube-proxy                               0                   e55af0e02ad63       kube-proxy-vf896
	78d7a5e6ce278       2e96e5913fc06       6 minutes ago        Running             etcd                                     0                   6a1cfca51efb7       etcd-addons-722117
	d4695effb3cbc       9499c9960544e       6 minutes ago        Running             kube-apiserver                           0                   052f08644f287       kube-apiserver-addons-722117
	e79423ec06037       0486b6c53a1b5       6 minutes ago        Running             kube-controller-manager                  0                   5f86edb4d9f6e       kube-controller-manager-addons-722117
	4619951816670       847c7bc1a5418       6 minutes ago        Running             kube-scheduler                           0                   e7dc9e124f479       kube-scheduler-addons-722117
	
	
	==> containerd <==
	Dec 09 23:50:36 addons-722117 containerd[643]: time="2024-12-09T23:50:36.386299100Z" level=info msg="RemoveContainer for \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\""
	Dec 09 23:50:36 addons-722117 containerd[643]: time="2024-12-09T23:50:36.397184439Z" level=info msg="RemoveContainer for \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\" returns successfully"
	Dec 09 23:50:36 addons-722117 containerd[643]: time="2024-12-09T23:50:36.398479060Z" level=error msg="ContainerStatus for \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\": not found"
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.365482940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:fed7ec4a-1bd8-4a30-9b5f-6197ef725f24,Namespace:default,Attempt:0,}"
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.513681788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.513848317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.513901010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.514365740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:50:47 addons-722117 containerd[643]: time="2024-12-09T23:50:47.606657928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:fed7ec4a-1bd8-4a30-9b5f-6197ef725f24,Namespace:default,Attempt:0,} returns sandbox id \"a5e59eb49356c73af3ac0cfa228251671931dc64838614947b1d287107974ce7\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.786146404Z" level=info msg="StopPodSandbox for \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.818088746Z" level=info msg="TearDown network for sandbox \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\" successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.818141570Z" level=info msg="StopPodSandbox for \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\" returns successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.819137531Z" level=info msg="RemovePodSandbox for \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.819468837Z" level=info msg="Forcibly stopping sandbox \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.848630591Z" level=info msg="TearDown network for sandbox \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\" successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.857936636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.858133471Z" level=info msg="RemovePodSandbox \"3a6242931245b34ac88ce253d0a23b79b4fd49444ec13c98e496ef86206dd5f3\" returns successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.859750301Z" level=info msg="StopPodSandbox for \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.892526715Z" level=info msg="TearDown network for sandbox \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\" successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.892634406Z" level=info msg="StopPodSandbox for \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\" returns successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.893166341Z" level=info msg="RemovePodSandbox for \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.893219056Z" level=info msg="Forcibly stopping sandbox \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\""
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.942047911Z" level=info msg="TearDown network for sandbox \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\" successfully"
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.950828485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 09 23:51:28 addons-722117 containerd[643]: time="2024-12-09T23:51:28.950939901Z" level=info msg="RemovePodSandbox \"50dd5c7a81825903ccc781635d770f8f8decf88ac83751d649983ac10d2dbff2\" returns successfully"
	
	
	==> coredns [5c87e29dc4adab7fc26f790492c0a01b2604d2fb0e9cde09d33d11799d153c60] <==
	[INFO] 127.0.0.1:57008 - 57910 "HINFO IN 3670311461197461873.5399790846267928933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008918028s
	[INFO] 10.244.0.7:58096 - 1319 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000522927s
	[INFO] 10.244.0.7:58096 - 61230 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000611s
	[INFO] 10.244.0.7:58096 - 59503 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000209895s
	[INFO] 10.244.0.7:58096 - 15311 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000221006s
	[INFO] 10.244.0.7:58096 - 26269 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000070778s
	[INFO] 10.244.0.7:58096 - 14589 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000167671s
	[INFO] 10.244.0.7:58096 - 54376 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000243761s
	[INFO] 10.244.0.7:58096 - 45570 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118401s
	[INFO] 10.244.0.7:58189 - 7959 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072801s
	[INFO] 10.244.0.7:58189 - 7607 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032653s
	[INFO] 10.244.0.7:60109 - 34246 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085123s
	[INFO] 10.244.0.7:60109 - 34482 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003691s
	[INFO] 10.244.0.7:50395 - 36585 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058944s
	[INFO] 10.244.0.7:50395 - 36801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080032s
	[INFO] 10.244.0.7:47340 - 58573 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134719s
	[INFO] 10.244.0.7:47340 - 58750 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00033851s
	[INFO] 10.244.0.27:49796 - 59121 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000826545s
	[INFO] 10.244.0.27:59686 - 34600 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000141968s
	[INFO] 10.244.0.27:58796 - 32994 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103678s
	[INFO] 10.244.0.27:40879 - 23189 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152028s
	[INFO] 10.244.0.27:43279 - 35731 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081613s
	[INFO] 10.244.0.27:57936 - 55654 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.002519835s
	[INFO] 10.244.0.27:49182 - 1309 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 154 0.004468361s
	[INFO] 10.244.0.27:42133 - 9878 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 190 0.006295981s
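	Note on the coredns log above: the NXDOMAIN/NOERROR pattern for registry.kube-system.svc.cluster.local is the normal search-list expansion, not a failure. Assuming the default Kubernetes pod resolv.conf (search domains kube-system.svc.cluster.local, svc.cluster.local, cluster.local with ndots:5), a name with fewer than 5 dots is tried with each suffix first, producing exactly the three NXDOMAIN queries followed by the NOERROR answer seen above. A minimal sketch of that expansion (search list and ndots are assumptions based on the defaults, not read from this run):

	```python
	# Sketch of glibc/musl search-list expansion as configured by the
	# (assumed) default Kubernetes pod resolv.conf: ndots:5 plus the
	# three cluster search domains for a pod in kube-system's cluster.
	SEARCH = [
	    "kube-system.svc.cluster.local",
	    "svc.cluster.local",
	    "cluster.local",
	]
	NDOTS = 5

	def candidates(name, search=SEARCH, ndots=NDOTS):
	    """Return the FQDNs a resolver tries, in order."""
	    if name.endswith("."):
	        return [name.rstrip(".")]  # absolute name: no expansion
	    if name.count(".") < ndots:
	        # Fewer dots than ndots: search suffixes first, bare name last.
	        return [f"{name}.{s}" for s in search] + [name]
	    # Enough dots: try the name as-is first, then the suffixes.
	    return [name] + [f"{name}.{s}" for s in search]

	for fqdn in candidates("registry.kube-system.svc.cluster.local"):
	    print(fqdn)
	```

	The query name has 4 dots, so the three suffixed forms (the NXDOMAIN lines) are tried before the bare name, which resolves with NOERROR — matching the log order exactly.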
	
	
	==> describe nodes <==
	Name:               addons-722117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-722117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=addons-722117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_45_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-722117
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-722117"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:45:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-722117
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:51:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:50:35 +0000   Mon, 09 Dec 2024 23:45:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:50:35 +0000   Mon, 09 Dec 2024 23:45:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:50:35 +0000   Mon, 09 Dec 2024 23:45:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:50:35 +0000   Mon, 09 Dec 2024 23:45:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    addons-722117
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eba9f1570dac4a67b7b92e2e2d09650e
	  System UUID:                eba9f157-0dac-4a67-b7b9-2e2e2d09650e
	  Boot ID:                    d0d3822b-7acd-40cb-b60f-9075fc13bdbb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  gadget                      gadget-x8nnc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  headlamp                    headlamp-cd8ffd6fc-tq6vq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-xqhhc                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m47s
	  kube-system                 amd-gpu-device-plugin-j9jp7                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-w9cwn                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m57s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 csi-hostpathplugin-ts2kp                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 etcd-addons-722117                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m3s
	  kube-system                 kube-apiserver-addons-722117                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-addons-722117                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-proxy-vf896                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-addons-722117                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 registry-5cc95cd69-89n6d                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 registry-proxy-qjd4j                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 snapshot-controller-56fcc65765-cm7q9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 snapshot-controller-56fcc65765-twpq9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  local-path-storage          helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  local-path-storage          local-path-provisioner-86d989889c-tvvgq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)  kubelet          Node addons-722117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)  kubelet          Node addons-722117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m8s)  kubelet          Node addons-722117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s                 kubelet          Node addons-722117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s                 kubelet          Node addons-722117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s                 kubelet          Node addons-722117 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m1s                 kubelet          Node addons-722117 status is now: NodeReady
	  Normal  RegisteredNode           5m58s                node-controller  Node addons-722117 event: Registered Node addons-722117 in Controller
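	The "Allocated resources" totals in the node description can be cross-checked against the per-pod CPU requests listed above it. A quick verification sketch (values copied from this report; kubectl truncates the percentage to an integer):

	```python
	# Cross-check of "cpu 850m (42%)" in the Allocated resources section
	# against the non-zero per-pod CPU requests listed in the node
	# description above (values taken from this report).
	cpu_requests_m = {
	    "ingress-nginx-controller": 100,
	    "coredns": 100,
	    "etcd": 100,
	    "kube-apiserver": 250,
	    "kube-controller-manager": 200,
	    "kube-scheduler": 100,
	}
	allocatable_cpu_m = 2 * 1000  # node reports 2 allocatable CPUs

	total_m = sum(cpu_requests_m.values())
	pct = total_m * 100 // allocatable_cpu_m  # integer truncation, as kubectl shows
	print(f"cpu {total_m}m ({pct}%)")
	```

	850m of 2000m is 42.5%, displayed as 42% — consistent with the table.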
	
	
	==> dmesg <==
	[  +5.653541] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.092638] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.702088] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.891733] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.017101] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.138848] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.271053] kauditd_printk_skb: 114 callbacks suppressed
	[Dec 9 23:46] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 9 23:47] kauditd_printk_skb: 22 callbacks suppressed
	[  +8.780998] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.306570] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.723378] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.935410] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 9 23:48] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.959820] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 9 23:49] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.273061] kauditd_printk_skb: 7 callbacks suppressed
	[ +26.880613] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.080421] kauditd_printk_skb: 20 callbacks suppressed
	[Dec 9 23:50] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.051256] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.133874] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.896490] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.955624] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.869239] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [78d7a5e6ce278f8ea1b04742831da4da280051c0bd6f2912c2a3396696cbbf8f] <==
	{"level":"warn","ts":"2024-12-09T23:45:52.475125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.918246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/amd-gpu-device-plugin-j9jp7\" ","response":"range_response_count:1 size:4340"}
	{"level":"info","ts":"2024-12-09T23:45:52.475163Z","caller":"traceutil/trace.go:171","msg":"trace[253028469] range","detail":"{range_begin:/registry/pods/kube-system/amd-gpu-device-plugin-j9jp7; range_end:; response_count:1; response_revision:994; }","duration":"333.956988ms","start":"2024-12-09T23:45:52.141201Z","end":"2024-12-09T23:45:52.475158Z","steps":["trace[253028469] 'agreement among raft nodes before linearized reading'  (duration: 333.895174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:52.475181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:45:52.141167Z","time spent":"334.009064ms","remote":"127.0.0.1:54254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":4363,"request content":"key:\"/registry/pods/kube-system/amd-gpu-device-plugin-j9jp7\" "}
	{"level":"info","ts":"2024-12-09T23:46:12.383952Z","caller":"traceutil/trace.go:171","msg":"trace[1330057931] linearizableReadLoop","detail":"{readStateIndex:1070; appliedIndex:1069; }","duration":"157.881417ms","start":"2024-12-09T23:46:12.226006Z","end":"2024-12-09T23:46:12.383888Z","steps":["trace[1330057931] 'read index received'  (duration: 152.145864ms)","trace[1330057931] 'applied index is now lower than readState.Index'  (duration: 5.734766ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:46:12.384187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.162139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-09T23:46:12.384220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.81559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:12.384364Z","caller":"traceutil/trace.go:171","msg":"trace[1547048080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1044; }","duration":"148.973351ms","start":"2024-12-09T23:46:12.235341Z","end":"2024-12-09T23:46:12.384315Z","steps":["trace[1547048080] 'agreement among raft nodes before linearized reading'  (duration: 148.786032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:12.384437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.649536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:12.384479Z","caller":"traceutil/trace.go:171","msg":"trace[1630581327] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1044; }","duration":"124.694814ms","start":"2024-12-09T23:46:12.259778Z","end":"2024-12-09T23:46:12.384473Z","steps":["trace[1630581327] 'agreement among raft nodes before linearized reading'  (duration: 124.640795ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:12.384286Z","caller":"traceutil/trace.go:171","msg":"trace[933961020] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1044; }","duration":"158.275178ms","start":"2024-12-09T23:46:12.226002Z","end":"2024-12-09T23:46:12.384277Z","steps":["trace[933961020] 'agreement among raft nodes before linearized reading'  (duration: 158.029021ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:29.217824Z","caller":"traceutil/trace.go:171","msg":"trace[1239604434] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"107.911997ms","start":"2024-12-09T23:46:29.109888Z","end":"2024-12-09T23:46:29.217800Z","steps":["trace[1239604434] 'process raft request'  (duration: 107.641839ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:05.874666Z","caller":"traceutil/trace.go:171","msg":"trace[939028665] linearizableReadLoop","detail":"{readStateIndex:1175; appliedIndex:1174; }","duration":"117.764423ms","start":"2024-12-09T23:47:05.756886Z","end":"2024-12-09T23:47:05.874650Z","steps":["trace[939028665] 'read index received'  (duration: 117.547187ms)","trace[939028665] 'applied index is now lower than readState.Index'  (duration: 216.777µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:47:05.874873Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.98464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:47:05.874921Z","caller":"traceutil/trace.go:171","msg":"trace[692138318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"118.054021ms","start":"2024-12-09T23:47:05.756859Z","end":"2024-12-09T23:47:05.874913Z","steps":["trace[692138318] 'agreement among raft nodes before linearized reading'  (duration: 117.934585ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:05.875246Z","caller":"traceutil/trace.go:171","msg":"trace[736778298] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"172.248469ms","start":"2024-12-09T23:47:05.702988Z","end":"2024-12-09T23:47:05.875236Z","steps":["trace[736778298] 'process raft request'  (duration: 171.500561ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:10.167979Z","caller":"traceutil/trace.go:171","msg":"trace[340053953] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1180; }","duration":"160.959383ms","start":"2024-12-09T23:47:10.007004Z","end":"2024-12-09T23:47:10.167964Z","steps":["trace[340053953] 'read index received'  (duration: 157.917567ms)","trace[340053953] 'applied index is now lower than readState.Index'  (duration: 3.040529ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:47:10.168302Z","caller":"traceutil/trace.go:171","msg":"trace[1023503799] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"168.137768ms","start":"2024-12-09T23:47:10.000151Z","end":"2024-12-09T23:47:10.168288Z","steps":["trace[1023503799] 'process raft request'  (duration: 165.718836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:10.168466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.446396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:47:10.168536Z","caller":"traceutil/trace.go:171","msg":"trace[2068072303] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1143; }","duration":"161.501408ms","start":"2024-12-09T23:47:10.007000Z","end":"2024-12-09T23:47:10.168502Z","steps":["trace[2068072303] 'agreement among raft nodes before linearized reading'  (duration: 161.431676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:10.168675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.613022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:47:10.168755Z","caller":"traceutil/trace.go:171","msg":"trace[296464408] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1143; }","duration":"141.700164ms","start":"2024-12-09T23:47:10.027047Z","end":"2024-12-09T23:47:10.168747Z","steps":["trace[296464408] 'agreement among raft nodes before linearized reading'  (duration: 141.592899ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:24.273375Z","caller":"traceutil/trace.go:171","msg":"trace[1988923661] linearizableReadLoop","detail":"{readStateIndex:1225; appliedIndex:1224; }","duration":"247.166632ms","start":"2024-12-09T23:47:24.026192Z","end":"2024-12-09T23:47:24.273358Z","steps":["trace[1988923661] 'read index received'  (duration: 246.982286ms)","trace[1988923661] 'applied index is now lower than readState.Index'  (duration: 183.892µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:47:24.273498Z","caller":"traceutil/trace.go:171","msg":"trace[1414201139] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"261.4369ms","start":"2024-12-09T23:47:24.012054Z","end":"2024-12-09T23:47:24.273491Z","steps":["trace[1414201139] 'process raft request'  (duration: 261.167943ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:24.273631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.425535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:47:24.273676Z","caller":"traceutil/trace.go:171","msg":"trace[279380705] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"247.479838ms","start":"2024-12-09T23:47:24.026187Z","end":"2024-12-09T23:47:24.273667Z","steps":["trace[279380705] 'agreement among raft nodes before linearized reading'  (duration: 247.409708ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:51:29 up 6 min,  0 users,  load average: 0.58, 1.06, 0.64
	Linux addons-722117 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d4695effb3cbca3275ce1d49a48118f0995596448075438e1b2b3855febd78f4] <==
	E1209 23:48:51.037976       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.60.192:443: connect: connection refused" logger="UnhandledError"
	W1209 23:48:51.109185       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.60.192:443: connect: connection refused
	E1209 23:48:51.109213       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.60.192:443: connect: connection refused" logger="UnhandledError"
	I1209 23:49:26.898158       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1209 23:49:26.934162       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I1209 23:49:44.836310       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1209 23:49:44.965865       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1209 23:49:45.477558       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1209 23:49:45.541225       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1209 23:49:45.680279       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1209 23:49:46.024672       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W1209 23:49:46.117120       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I1209 23:49:46.255738       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I1209 23:49:46.268986       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I1209 23:49:46.320118       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1209 23:49:47.022450       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1209 23:49:47.022655       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1209 23:49:47.087507       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1209 23:49:47.115467       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1209 23:49:47.320474       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W1209 23:49:47.625163       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E1209 23:50:06.646248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:46356: use of closed network connection
	E1209 23:50:06.851068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:46388: use of closed network connection
	I1209 23:50:16.953301       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.42.121"}
	I1209 23:50:33.410121       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [e79423ec06037375745a89c4cdbe1ff4160e647333035479cf3960b855d21c5e] <==
	E1209 23:50:25.933142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:26.331877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:26.331937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:26.593946       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:26.594065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:50:28.895228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="7.198µs"
	I1209 23:50:35.383829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-722117"
	I1209 23:50:35.565629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="9.398µs"
	I1209 23:50:45.667532       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1209 23:50:48.461913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:48.462200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:49.274177       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:49.274494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:50.389596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:50.389634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:57.031541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:57.031625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:01.285064       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:01.285423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:08.913450       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:08.913489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:08.999547       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:08.999597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:23.794831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:23.795422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [7d1d26993435e5c36a8980e8615b90263e9b18efc8ca19330277bed52b4598ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:45:33.626805       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:45:33.646689       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.28"]
	E1209 23:45:33.648016       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:45:33.779765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:45:33.779813       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:45:33.779847       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:45:33.790934       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:45:33.791274       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:45:33.791304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:45:33.792685       1 config.go:199] "Starting service config controller"
	I1209 23:45:33.792778       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:45:33.792829       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:45:33.792833       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:45:33.793251       1 config.go:328] "Starting node config controller"
	I1209 23:45:33.793286       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:45:33.894394       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:45:33.894468       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:45:33.894497       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [461995181667010c43ef61a32c309742d8a65af1c07661e876b88a7c3b992625] <==
	E1209 23:45:24.982101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:24.979161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 23:45:24.982143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1209 23:45:24.979230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:25.835465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:45:25.835523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:25.872639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:45:25.873187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:25.888026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 23:45:25.888330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:25.972607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 23:45:25.972823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:25.984018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:45:25.984070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:26.116508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:45:26.117145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:26.157142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 23:45:26.157198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:26.166032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 23:45:26.166090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:26.271510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 23:45:26.271565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:45:26.361025       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:45:26.361077       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1209 23:45:28.964172       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:50:29 addons-722117 kubelet[1180]: I1209 23:50:29.433577    1180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p8zv4\" (UniqueName: \"kubernetes.io/projected/57044f92-0e4e-41cd-bfb0-aac97d76092f-kube-api-access-p8zv4\") on node \"addons-722117\" DevicePath \"\""
	Dec 09 23:50:29 addons-722117 kubelet[1180]: I1209 23:50:29.617466    1180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57044f92-0e4e-41cd-bfb0-aac97d76092f" path="/var/lib/kubelet/pods/57044f92-0e4e-41cd-bfb0-aac97d76092f/volumes"
	Dec 09 23:50:33 addons-722117 kubelet[1180]: I1209 23:50:33.614645    1180 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-5cc95cd69-89n6d" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:50:35 addons-722117 kubelet[1180]: I1209 23:50:35.986786    1180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hcv\" (UniqueName: \"kubernetes.io/projected/8208dafd-c136-4474-80b4-e858699088aa-kube-api-access-85hcv\") pod \"8208dafd-c136-4474-80b4-e858699088aa\" (UID: \"8208dafd-c136-4474-80b4-e858699088aa\") "
	Dec 09 23:50:35 addons-722117 kubelet[1180]: I1209 23:50:35.993218    1180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8208dafd-c136-4474-80b4-e858699088aa-kube-api-access-85hcv" (OuterVolumeSpecName: "kube-api-access-85hcv") pod "8208dafd-c136-4474-80b4-e858699088aa" (UID: "8208dafd-c136-4474-80b4-e858699088aa"). InnerVolumeSpecName "kube-api-access-85hcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 09 23:50:36 addons-722117 kubelet[1180]: I1209 23:50:36.087928    1180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-85hcv\" (UniqueName: \"kubernetes.io/projected/8208dafd-c136-4474-80b4-e858699088aa-kube-api-access-85hcv\") on node \"addons-722117\" DevicePath \"\""
	Dec 09 23:50:36 addons-722117 kubelet[1180]: I1209 23:50:36.382908    1180 scope.go:117] "RemoveContainer" containerID="7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e"
	Dec 09 23:50:36 addons-722117 kubelet[1180]: I1209 23:50:36.397514    1180 scope.go:117] "RemoveContainer" containerID="7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e"
	Dec 09 23:50:36 addons-722117 kubelet[1180]: E1209 23:50:36.398824    1180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\": not found" containerID="7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e"
	Dec 09 23:50:36 addons-722117 kubelet[1180]: I1209 23:50:36.398912    1180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e"} err="failed to get container status \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e31a1fd68928beead31b6fa411810dc58bd91a05e973212ca53f5ec013ab69e\": not found"
	Dec 09 23:50:37 addons-722117 kubelet[1180]: I1209 23:50:37.617168    1180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8208dafd-c136-4474-80b4-e858699088aa" path="/var/lib/kubelet/pods/8208dafd-c136-4474-80b4-e858699088aa/volumes"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: E1209 23:50:47.062281    1180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57044f92-0e4e-41cd-bfb0-aac97d76092f" containerName="cloud-spanner-emulator"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: E1209 23:50:47.062344    1180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8208dafd-c136-4474-80b4-e858699088aa" containerName="yakd"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: I1209 23:50:47.062403    1180 memory_manager.go:354] "RemoveStaleState removing state" podUID="57044f92-0e4e-41cd-bfb0-aac97d76092f" containerName="cloud-spanner-emulator"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: I1209 23:50:47.062415    1180 memory_manager.go:354] "RemoveStaleState removing state" podUID="8208dafd-c136-4474-80b4-e858699088aa" containerName="yakd"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: I1209 23:50:47.179188    1180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a83b1b8-83d8-4965-a97b-805dd336f27a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^68fb8bf1-b688-11ef-bf29-e69b65fcd388\") pod \"task-pv-pod\" (UID: \"fed7ec4a-1bd8-4a30-9b5f-6197ef725f24\") " pod="default/task-pv-pod"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: I1209 23:50:47.179346    1180 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vn24\" (UniqueName: \"kubernetes.io/projected/fed7ec4a-1bd8-4a30-9b5f-6197ef725f24-kube-api-access-9vn24\") pod \"task-pv-pod\" (UID: \"fed7ec4a-1bd8-4a30-9b5f-6197ef725f24\") " pod="default/task-pv-pod"
	Dec 09 23:50:47 addons-722117 kubelet[1180]: I1209 23:50:47.299212    1180 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a83b1b8-83d8-4965-a97b-805dd336f27a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^68fb8bf1-b688-11ef-bf29-e69b65fcd388\") pod \"task-pv-pod\" (UID: \"fed7ec4a-1bd8-4a30-9b5f-6197ef725f24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/2cef3c7ec7fac32ea3141f580283ac4db35f442274e1c1de2513a12ed84453e5/globalmount\"" pod="default/task-pv-pod"
	Dec 09 23:50:55 addons-722117 kubelet[1180]: I1209 23:50:55.614489    1180 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j9jp7" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:51:17 addons-722117 kubelet[1180]: I1209 23:51:17.616957    1180 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:51:27 addons-722117 kubelet[1180]: E1209 23:51:27.631126    1180 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 23:51:27 addons-722117 kubelet[1180]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 23:51:27 addons-722117 kubelet[1180]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 23:51:27 addons-722117 kubelet[1180]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 23:51:27 addons-722117 kubelet[1180]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a35a729e68cffc7062444e8e615b4a04bf7b0bf68c4508f106e93d820acef165] <==
	I1209 23:45:40.388613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:45:40.403434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:45:40.403487       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:45:40.419077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:45:40.419398       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-722117_67dbf7b0-5d79-490c-96c0-aeb9828d43c7!
	I1209 23:45:40.420850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ac8a4b4-9627-4bdf-a35d-896d3e65959d", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-722117_67dbf7b0-5d79-490c-96c0-aeb9828d43c7 became leader
	I1209 23:45:40.519824       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-722117_67dbf7b0-5d79-490c-96c0-aeb9828d43c7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-722117 -n addons-722117
helpers_test.go:261: (dbg) Run:  kubectl --context addons-722117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: registry-test task-pv-pod test-local-path headlamp-cd8ffd6fc-tq6vq ingress-nginx-admission-create-gjcwh ingress-nginx-admission-patch-7c6qq helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-722117 describe pod registry-test task-pv-pod test-local-path headlamp-cd8ffd6fc-tq6vq ingress-nginx-admission-create-gjcwh ingress-nginx-admission-patch-7c6qq helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-722117 describe pod registry-test task-pv-pod test-local-path headlamp-cd8ffd6fc-tq6vq ingress-nginx-admission-create-gjcwh ingress-nginx-admission-patch-7c6qq helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b: exit status 1 (95.500316ms)

-- stdout --
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-722117/192.168.39.28
	Start Time:                Mon, 09 Dec 2024 23:50:28 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        
	IPs:                       <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8kg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d8kg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/registry-test to addons-722117
	  Normal  Pulling    62s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-722117/192.168.39.28
	Start Time:       Mon, 09 Dec 2024 23:50:47 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vn24 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-9vn24:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  43s   default-scheduler  Successfully assigned default/task-pv-pod to addons-722117
	  Normal  Pulling    43s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bz9lg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-bz9lg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-cd8ffd6fc-tq6vq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-gjcwh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7c6qq" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-722117 describe pod registry-test task-pv-pod test-local-path headlamp-cd8ffd6fc-tq6vq ingress-nginx-admission-create-gjcwh ingress-nginx-admission-patch-7c6qq helper-pod-create-pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (75.27s)


Test pass (289/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 33.78
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 15.41
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 86.14
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 266.06
29 TestAddons/serial/Volcano 44.61
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 11.53
36 TestAddons/parallel/Ingress 64.58
37 TestAddons/parallel/InspektorGadget 11.72
38 TestAddons/parallel/MetricsServer 7.2
40 TestAddons/parallel/CSI 137.53
41 TestAddons/parallel/Headlamp 135.71
42 TestAddons/parallel/CloudSpanner 5.72
43 TestAddons/parallel/LocalPath 190.11
44 TestAddons/parallel/NvidiaDevicePlugin 6.59
45 TestAddons/parallel/Yakd 11.79
47 TestAddons/StoppedEnableDisable 92.71
48 TestCertOptions 89.99
49 TestCertExpiration 322.93
51 TestForceSystemdFlag 83.95
52 TestForceSystemdEnv 62.09
54 TestKVMDriverInstallOrUpdate 7.91
58 TestErrorSpam/setup 44.17
59 TestErrorSpam/start 0.39
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.66
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 4.86
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 86.94
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 44.5
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.3
75 TestFunctional/serial/CacheCmd/cache/add_local 2.71
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 38.32
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.52
86 TestFunctional/serial/LogsFileCmd 1.41
87 TestFunctional/serial/InvalidService 4.66
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 20.95
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.9
97 TestFunctional/parallel/ServiceCmdConnect 21.57
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 47.91
101 TestFunctional/parallel/SSHCmd 0.46
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 25.37
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.51
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
113 TestFunctional/parallel/License 0.55
123 TestFunctional/parallel/ServiceCmd/DeployApp 21.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/ServiceCmd/List 0.5
126 TestFunctional/parallel/ProfileCmd/profile_list 0.35
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
130 TestFunctional/parallel/MountCmd/any-port 9.46
131 TestFunctional/parallel/ServiceCmd/Format 0.31
132 TestFunctional/parallel/ServiceCmd/URL 0.31
133 TestFunctional/parallel/Version/short 0.05
134 TestFunctional/parallel/Version/components 0.67
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.93
140 TestFunctional/parallel/ImageCommands/Setup 2.51
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.43
144 TestFunctional/parallel/MountCmd/specific-port 2.04
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.02
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 198.24
160 TestMultiControlPlane/serial/DeployApp 8.07
161 TestMultiControlPlane/serial/PingHostFromPods 1.27
162 TestMultiControlPlane/serial/AddWorkerNode 59.62
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
165 TestMultiControlPlane/serial/CopyFile 13.51
166 TestMultiControlPlane/serial/StopSecondaryNode 92.43
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
168 TestMultiControlPlane/serial/RestartSecondaryNode 40.67
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 477.48
171 TestMultiControlPlane/serial/DeleteSecondaryNode 7.08
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 274.74
174 TestMultiControlPlane/serial/RestartCluster 150.12
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
176 TestMultiControlPlane/serial/AddSecondaryNode 79.72
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
181 TestJSONOutput/start/Command 82.13
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.7
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.62
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 91.33
213 TestMountStart/serial/StartWithMountFirst 30.29
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 31.3
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.68
218 TestMountStart/serial/VerifyMountPostDelete 0.49
219 TestMountStart/serial/Stop 1.57
220 TestMountStart/serial/RestartStopped 23.59
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 118.78
225 TestMultiNode/serial/DeployApp2Nodes 6.97
226 TestMultiNode/serial/PingHostFrom2Pods 0.83
227 TestMultiNode/serial/AddNode 52.73
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.38
231 TestMultiNode/serial/StopNode 2.24
232 TestMultiNode/serial/StartAfterStop 36.36
233 TestMultiNode/serial/RestartKeepsNodes 317.97
234 TestMultiNode/serial/DeleteNode 2.21
235 TestMultiNode/serial/StopMultiNode 183.25
236 TestMultiNode/serial/RestartMultiNode 105.56
237 TestMultiNode/serial/ValidateNameConflict 48.31
242 TestPreload 201.51
244 TestScheduledStopUnix 114.42
248 TestRunningBinaryUpgrade 192.34
250 TestKubernetesUpgrade 179.2
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 99.54
255 TestStoppedBinaryUpgrade/Setup 3.2
256 TestStoppedBinaryUpgrade/Upgrade 174.02
257 TestNoKubernetes/serial/StartWithStopK8s 67.27
258 TestNoKubernetes/serial/Start 29.42
266 TestNetworkPlugins/group/false 3.94
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
268 TestNoKubernetes/serial/ProfileList 0.69
269 TestNoKubernetes/serial/Stop 2.33
273 TestNoKubernetes/serial/StartNoArgs 59.82
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
284 TestPause/serial/Start 146.27
285 TestNetworkPlugins/group/auto/Start 90.7
286 TestNetworkPlugins/group/kindnet/Start 95.77
287 TestPause/serial/SecondStartNoReconfiguration 39.91
288 TestNetworkPlugins/group/auto/KubeletFlags 0.24
289 TestNetworkPlugins/group/auto/NetCatPod 10.29
290 TestNetworkPlugins/group/auto/DNS 0.16
291 TestNetworkPlugins/group/auto/Localhost 0.13
292 TestNetworkPlugins/group/auto/HairPin 0.12
293 TestNetworkPlugins/group/calico/Start 87.42
294 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
295 TestPause/serial/Pause 0.82
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
297 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
298 TestPause/serial/VerifyStatus 0.32
299 TestPause/serial/Unpause 0.74
300 TestPause/serial/PauseAgain 0.96
301 TestPause/serial/DeletePaused 0.85
302 TestPause/serial/VerifyDeletedResources 0.6
303 TestNetworkPlugins/group/custom-flannel/Start 91.09
304 TestNetworkPlugins/group/kindnet/DNS 0.17
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.12
307 TestNetworkPlugins/group/enable-default-cni/Start 106.6
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/flannel/Start 83.22
310 TestNetworkPlugins/group/calico/KubeletFlags 0.23
311 TestNetworkPlugins/group/calico/NetCatPod 10.27
312 TestNetworkPlugins/group/calico/DNS 0.19
313 TestNetworkPlugins/group/calico/Localhost 0.16
314 TestNetworkPlugins/group/calico/HairPin 0.15
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
317 TestNetworkPlugins/group/custom-flannel/DNS 0.18
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
320 TestNetworkPlugins/group/bridge/Start 88.81
322 TestStartStop/group/old-k8s-version/serial/FirstStart 195.38
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
329 TestStartStop/group/no-preload/serial/FirstStart 108.11
330 TestNetworkPlugins/group/flannel/ControllerPod 6.22
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
332 TestNetworkPlugins/group/flannel/NetCatPod 11.05
333 TestNetworkPlugins/group/flannel/DNS 0.15
334 TestNetworkPlugins/group/flannel/Localhost 0.12
335 TestNetworkPlugins/group/flannel/HairPin 0.14
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
338 TestStartStop/group/embed-certs/serial/FirstStart 90.22
339 TestNetworkPlugins/group/bridge/NetCatPod 10.31
340 TestNetworkPlugins/group/bridge/DNS 0.22
341 TestNetworkPlugins/group/bridge/Localhost 0.18
342 TestNetworkPlugins/group/bridge/HairPin 0.19
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.68
345 TestStartStop/group/no-preload/serial/DeployApp 11.31
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
347 TestStartStop/group/no-preload/serial/Stop 91.74
348 TestStartStop/group/embed-certs/serial/DeployApp 11.29
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/embed-certs/serial/Stop 92.46
351 TestStartStop/group/old-k8s-version/serial/DeployApp 12.44
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.79
356 TestStartStop/group/old-k8s-version/serial/Stop 92.49
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
358 TestStartStop/group/no-preload/serial/SecondStart 313.51
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
360 TestStartStop/group/embed-certs/serial/SecondStart 298.26
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 294.17
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/old-k8s-version/serial/SecondStart 186.76
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/old-k8s-version/serial/Pause 2.67
370 TestStartStop/group/newest-cni/serial/FirstStart 52.22
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
373 TestStartStop/group/newest-cni/serial/Stop 2.33
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/newest-cni/serial/SecondStart 77.64
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
381 TestStartStop/group/no-preload/serial/Pause 2.72
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
383 TestStartStop/group/embed-certs/serial/Pause 2.91
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.62
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/newest-cni/serial/Pause 2.35
TestDownloadOnly/v1.20.0/json-events (33.78s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-443803 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-443803 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (33.780568986s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (33.78s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 23:44:27.523509  316833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 23:44:27.523625  316833 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-443803
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-443803: exit status 85 (68.516849ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |          |
	|         | -p download-only-443803        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:53
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:53.786188  316845 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:53.786304  316845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:53.786314  316845 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:53.786319  316845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:53.786508  316845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	W1209 23:43:53.786650  316845 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20062-309592/.minikube/config/config.json: open /home/jenkins/minikube-integration/20062-309592/.minikube/config/config.json: no such file or directory
	I1209 23:43:53.787284  316845 out.go:352] Setting JSON to true
	I1209 23:43:53.788312  316845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":26756,"bootTime":1733761078,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:53.788425  316845 start.go:139] virtualization: kvm guest
	I1209 23:43:53.790771  316845 out.go:97] [download-only-443803] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:53.790975  316845 notify.go:220] Checking for updates...
	W1209 23:43:53.790977  316845 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 23:43:53.792496  316845 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:53.793987  316845 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:53.795410  316845 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:43:53.796892  316845 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:43:53.798299  316845 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:53.800946  316845 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:53.801321  316845 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:53.836907  316845 out.go:97] Using the kvm2 driver based on user configuration
	I1209 23:43:53.836933  316845 start.go:297] selected driver: kvm2
	I1209 23:43:53.836940  316845 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:43:53.837294  316845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:53.837388  316845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-309592/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:43:53.853311  316845 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:43:53.853358  316845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:53.853898  316845 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 23:43:53.854039  316845 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:53.854078  316845 cni.go:84] Creating CNI manager for ""
	I1209 23:43:53.854133  316845 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 23:43:53.854142  316845 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:53.854210  316845 start.go:340] cluster config:
	{Name:download-only-443803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-443803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:53.854398  316845 iso.go:125] acquiring lock: {Name:mk653a727a207899371d18f50d4ce9d11018138a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:53.856215  316845 out.go:97] Downloading VM boot image ...
	I1209 23:43:53.856257  316845 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20062-309592/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:44:06.938841  316845 out.go:97] Starting "download-only-443803" primary control-plane node in "download-only-443803" cluster
	I1209 23:44:06.938886  316845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:44:07.093858  316845 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I1209 23:44:07.093910  316845 cache.go:56] Caching tarball of preloaded images
	I1209 23:44:07.094123  316845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1209 23:44:07.096205  316845 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 23:44:07.096602  316845 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:44:07.256818  316845 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-443803 host does not exist
	  To start a cluster, run: "minikube start -p download-only-443803"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-443803
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
TestDownloadOnly/v1.31.2/json-events (15.41s)
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-000195 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-000195 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (15.413979699s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (15.41s)
TestDownloadOnly/v1.31.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 23:44:43.294498  316833 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 23:44:43.294535  316833 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-000195
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-000195: exit status 85 (65.974834ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-443803        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| delete  | -p download-only-443803        | download-only-443803 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC | 09 Dec 24 23:44 UTC |
	| start   | -o=json --download-only        | download-only-000195 | jenkins | v1.34.0 | 09 Dec 24 23:44 UTC |                     |
	|         | -p download-only-000195        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:44:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:44:27.922394  317117 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:44:27.922681  317117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:44:27.922692  317117 out.go:358] Setting ErrFile to fd 2...
	I1209 23:44:27.922696  317117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:44:27.922871  317117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1209 23:44:27.923470  317117 out.go:352] Setting JSON to true
	I1209 23:44:27.924331  317117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":26790,"bootTime":1733761078,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:44:27.924434  317117 start.go:139] virtualization: kvm guest
	I1209 23:44:27.926364  317117 out.go:97] [download-only-000195] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:44:27.926505  317117 notify.go:220] Checking for updates...
	I1209 23:44:27.927836  317117 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:44:27.929193  317117 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:44:27.930676  317117 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:44:27.932119  317117 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:44:27.933473  317117 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:44:27.935876  317117 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:44:27.936145  317117 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:44:27.968592  317117 out.go:97] Using the kvm2 driver based on user configuration
	I1209 23:44:27.968623  317117 start.go:297] selected driver: kvm2
	I1209 23:44:27.968633  317117 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:44:27.969073  317117 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:44:27.969176  317117 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-309592/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:44:27.984677  317117 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:44:27.984746  317117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:44:27.985265  317117 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 23:44:27.985398  317117 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:44:27.985430  317117 cni.go:84] Creating CNI manager for ""
	I1209 23:44:27.985480  317117 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1209 23:44:27.985489  317117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:44:27.985542  317117 start.go:340] cluster config:
	{Name:download-only-000195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-000195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:27.985641  317117 iso.go:125] acquiring lock: {Name:mk653a727a207899371d18f50d4ce9d11018138a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:44:27.987284  317117 out.go:97] Starting "download-only-000195" primary control-plane node in "download-only-000195" cluster
	I1209 23:44:27.987305  317117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:44:28.216444  317117 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1209 23:44:28.216475  317117 cache.go:56] Caching tarball of preloaded images
	I1209 23:44:28.216631  317117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:44:28.218451  317117 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 23:44:28.218474  317117 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:44:28.374829  317117 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:823d7cacd71c9363eaa034fc8738176b -> /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4
	I1209 23:44:41.120851  317117 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:44:41.120975  317117 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20062-309592/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-amd64.tar.lz4 ...
	I1209 23:44:41.864933  317117 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1209 23:44:41.865344  317117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/download-only-000195/config.json ...
	I1209 23:44:41.865391  317117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/download-only-000195/config.json: {Name:mk174f16816de51cbf4224775a68f985a5257c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:41.865602  317117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1209 23:44:41.865774  317117 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20062-309592/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-000195 host does not exist
	  To start a cluster, run: "minikube start -p download-only-000195"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-000195
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1209 23:44:43.893743  316833 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-125941 --alsologtostderr --binary-mirror http://127.0.0.1:33351 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-125941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-125941
--- PASS: TestBinaryMirror (0.61s)

TestOffline (86.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-724361 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-724361 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m25.062486413s)
helpers_test.go:175: Cleaning up "offline-containerd-724361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-724361
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-724361: (1.074143983s)
--- PASS: TestOffline (86.14s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-722117
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-722117: exit status 85 (53.511168ms)

-- stdout --
	* Profile "addons-722117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-722117"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-722117
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-722117: exit status 85 (54.357397ms)

-- stdout --
	* Profile "addons-722117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-722117"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (266.06s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-722117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-722117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m26.061650038s)
--- PASS: TestAddons/Setup (266.06s)

TestAddons/serial/Volcano (44.61s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 16.288625ms
addons_test.go:815: volcano-admission stabilized in 16.286856ms
addons_test.go:807: volcano-scheduler stabilized in 16.453604ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-bhnnk" [b21404d7-c9a4-4c0e-808f-146273c44d50] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004400384s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-24m8j" [7d7b456d-36fc-4e80-ad42-ffbc0dab159b] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005493515s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-jp5cq" [e440491a-0586-4d84-82bd-17480bd542fd] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003206136s
addons_test.go:842: (dbg) Run:  kubectl --context addons-722117 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-722117 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-722117 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b00b19b1-9057-4f95-b793-174a9e019506] Pending
helpers_test.go:344: "test-job-nginx-0" [b00b19b1-9057-4f95-b793-174a9e019506] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b00b19b1-9057-4f95-b793-174a9e019506] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.004791869s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable volcano --alsologtostderr -v=1: (11.224937562s)
--- PASS: TestAddons/serial/Volcano (44.61s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-722117 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-722117 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-722117 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-722117 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [80d99464-bceb-4cd6-b944-fe629c132ebd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [80d99464-bceb-4cd6-b944-fe629c132ebd] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004308365s
addons_test.go:633: (dbg) Run:  kubectl --context addons-722117 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-722117 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-722117 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

TestAddons/parallel/Ingress (64.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-722117 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [28edcc49-165c-48e1-8e7f-60834a80b94f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [28edcc49-165c-48e1-8e7f-60834a80b94f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 54.004178651s
I1209 23:52:37.571107  316833 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.28
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable ingress-dns --alsologtostderr -v=1: (1.462759622s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable ingress --alsologtostderr -v=1: (7.734871158s)
--- PASS: TestAddons/parallel/Ingress (64.58s)

TestAddons/parallel/InspektorGadget (11.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x8nnc" [386139b0-e9c1-4e82-88ac-55c2a993100a] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005020723s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable inspektor-gadget --alsologtostderr -v=1: (5.713280151s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

TestAddons/parallel/MetricsServer (7.2s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.5024ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6xkj4" [c1cd6238-412b-4017-974f-9c361334dfc5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011620621s
addons_test.go:402: (dbg) Run:  kubectl --context addons-722117 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable metrics-server --alsologtostderr -v=1: (1.106133472s)
--- PASS: TestAddons/parallel/MetricsServer (7.20s)

TestAddons/parallel/CSI (137.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 23:50:40.782735  316833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 23:50:40.791125  316833 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 23:50:40.791156  316833 kapi.go:107] duration metric: took 8.429495ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.44107ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fed7ec4a-1bd8-4a30-9b5f-6197ef725f24] Pending
helpers_test.go:344: "task-pv-pod" [fed7ec4a-1bd8-4a30-9b5f-6197ef725f24] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fed7ec4a-1bd8-4a30-9b5f-6197ef725f24] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m46.005150343s
addons_test.go:511: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-722117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-722117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-722117 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-722117 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [02e2cc72-72c7-437e-883f-33d4cb8fa704] Pending
helpers_test.go:344: "task-pv-pod-restore" [02e2cc72-72c7-437e-883f-33d4cb8fa704] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [02e2cc72-72c7-437e-883f-33d4cb8fa704] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.005034426s
addons_test.go:553: (dbg) Run:  kubectl --context addons-722117 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-722117 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-722117 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.932131208s)
--- PASS: TestAddons/parallel/CSI (137.53s)

TestAddons/parallel/Headlamp (135.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-722117 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-tq6vq" [003b72a6-ca81-440a-a4d3-6dcd393bad4c] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-tq6vq" [003b72a6-ca81-440a-a4d3-6dcd393bad4c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-tq6vq" [003b72a6-ca81-440a-a4d3-6dcd393bad4c] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 2m9.005476913s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable headlamp --alsologtostderr -v=1: (5.725909331s)
--- PASS: TestAddons/parallel/Headlamp (135.71s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-txdfm" [57044f92-0e4e-41cd-bfb0-aac97d76092f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004974412s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (190.11s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-722117 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-722117 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... previous line repeated 140 times while polling the PVC status ...]
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [158a0c17-237f-43f6-b6e8-401be16d5111] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [158a0c17-237f-43f6-b6e8-401be16d5111] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [158a0c17-237f-43f6-b6e8-401be16d5111] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004714635s
addons_test.go:906: (dbg) Run:  kubectl --context addons-722117 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ssh "cat /opt/local-path-provisioner/pvc-0ac36911-bd40-4f0e-adc0-0f4ef8b32d5b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-722117 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-722117 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.207480862s)
--- PASS: TestAddons/parallel/LocalPath (190.11s)
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gm9q9" [c1b6b975-b701-4a2f-ae36-440e4446d946] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005955465s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)
TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zsgvz" [8208dafd-c136-4474-80b4-e858699088aa] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006089456s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable yakd --alsologtostderr -v=1: (5.780900959s)
--- PASS: TestAddons/parallel/Yakd (11.79s)
TestAddons/StoppedEnableDisable (92.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-722117
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-722117: (1m32.414438806s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-722117
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-722117
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-722117
--- PASS: TestAddons/StoppedEnableDisable (92.71s)
TestCertOptions (89.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-726228 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-726228 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m28.521331979s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-726228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-726228 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-726228 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-726228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-726228
E1210 00:53:49.744845  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestCertOptions (89.99s)
TestCertExpiration (322.93s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-596498 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-596498 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m28.123783807s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-596498 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-596498 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (53.740884003s)
helpers_test.go:175: Cleaning up "cert-expiration-596498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-596498
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-596498: (1.060508959s)
--- PASS: TestCertExpiration (322.93s)
TestForceSystemdFlag (83.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-941273 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-941273 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m22.926842756s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-941273 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-941273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-941273
--- PASS: TestForceSystemdFlag (83.95s)
TestForceSystemdEnv (62.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-203083 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-203083 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m0.597499592s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-203083 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-203083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-203083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-203083: (1.23627196s)
--- PASS: TestForceSystemdEnv (62.09s)
TestKVMDriverInstallOrUpdate (7.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1210 00:51:22.709535  316833 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:51:22.709710  316833 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1210 00:51:22.740418  316833 install.go:62] docker-machine-driver-kvm2: exit status 1
W1210 00:51:22.740736  316833 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:51:22.740797  316833 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4288399158/001/docker-machine-driver-kvm2
I1210 00:51:23.290579  316833 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4288399158/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000686528 gz:0xc0006865b0 tar:0xc000686560 tar.bz2:0xc000686570 tar.gz:0xc000686580 tar.xz:0xc000686590 tar.zst:0xc0006865a0 tbz2:0xc000686570 tgz:0xc000686580 txz:0xc000686590 tzst:0xc0006865a0 xz:0xc0006865b8 zip:0xc0006865c0 zst:0xc0006865e0] Getters:map[file:0xc001991340 http:0xc0007ee730 https:0xc0007ee780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:51:23.290632  316833 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4288399158/001/docker-machine-driver-kvm2
I1210 00:51:27.375753  316833 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:51:27.375845  316833 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1210 00:51:27.404882  316833 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1210 00:51:27.404913  316833 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1210 00:51:27.404980  316833 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:51:27.405006  316833 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4288399158/002/docker-machine-driver-kvm2
I1210 00:51:27.729499  316833 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4288399158/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000686528 gz:0xc0006865b0 tar:0xc000686560 tar.bz2:0xc000686570 tar.gz:0xc000686580 tar.xz:0xc000686590 tar.zst:0xc0006865a0 tbz2:0xc000686570 tgz:0xc000686580 txz:0xc000686590 tzst:0xc0006865a0 xz:0xc0006865b8 zip:0xc0006865c0 zst:0xc0006865e0] Getters:map[file:0xc0013c8a20 http:0xc000767310 https:0xc000767360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:51:27.729547  316833 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4288399158/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (7.91s)
+
TestErrorSpam/setup (44.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-677881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-677881 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-677881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-677881 --driver=kvm2  --container-runtime=containerd: (44.169322775s)
--- PASS: TestErrorSpam/setup (44.17s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (4.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop: (2.342585988s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop: (1.463788814s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-677881 --log_dir /tmp/nospam-677881 stop: (1.053329315s)
--- PASS: TestErrorSpam/stop (4.86s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20062-309592/.minikube/files/etc/test/nested/copy/316833/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-283319 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m26.943049416s)
--- PASS: TestFunctional/serial/StartWithProxy (86.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.5s)

=== RUN   TestFunctional/serial/SoftStart
I1209 23:57:27.461118  316833 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-283319 --alsologtostderr -v=8: (44.501799311s)
functional_test.go:663: soft start took 44.502537697s for "functional-283319" cluster.
I1209 23:58:11.963340  316833 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (44.50s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-283319 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:3.1: (1.061590734s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:3.3: (1.157983813s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 cache add registry.k8s.io/pause:latest: (1.084136745s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.30s)

TestFunctional/serial/CacheCmd/cache/add_local (2.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-283319 /tmp/TestFunctionalserialCacheCmdcacheadd_local2246590518/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache add minikube-local-cache-test:functional-283319
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 cache add minikube-local-cache-test:functional-283319: (2.364984436s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache delete minikube-local-cache-test:functional-283319
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-283319
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.71s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (238.905811ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 kubectl -- --context functional-283319 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-283319 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-283319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.322545815s)
functional_test.go:761: restart took 38.32269502s for "functional-283319" cluster.
I1209 23:58:58.743805  316833 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (38.32s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-283319 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 logs: (1.51961173s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.41s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 logs --file /tmp/TestFunctionalserialLogsFileCmd2882385253/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 logs --file /tmp/TestFunctionalserialLogsFileCmd2882385253/001/logs.txt: (1.406237329s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-283319 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-283319
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-283319: exit status 115 (293.605048ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.152:32524 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-283319 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-283319 delete -f testdata/invalidsvc.yaml: (1.168700222s)
--- PASS: TestFunctional/serial/InvalidService (4.66s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 config get cpus: exit status 14 (63.939489ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 config get cpus: exit status 14 (62.281423ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (20.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-283319 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-283319 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 326637: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.95s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-283319 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (155.223985ms)

-- stdout --
	* [functional-283319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 23:59:31.894200  326435 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:59:31.894338  326435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:31.894347  326435 out.go:358] Setting ErrFile to fd 2...
	I1209 23:59:31.894351  326435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:31.894516  326435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1209 23:59:31.895060  326435 out.go:352] Setting JSON to false
	I1209 23:59:31.896001  326435 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":27694,"bootTime":1733761078,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:59:31.896115  326435 start.go:139] virtualization: kvm guest
	I1209 23:59:31.898646  326435 out.go:177] * [functional-283319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:59:31.900407  326435 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:59:31.900441  326435 notify.go:220] Checking for updates...
	I1209 23:59:31.903183  326435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:59:31.904463  326435 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:59:31.905845  326435 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:59:31.907320  326435 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:59:31.908757  326435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:59:31.910785  326435 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:59:31.911408  326435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:59:31.911486  326435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:31.927714  326435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I1209 23:59:31.928328  326435 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:31.929038  326435 main.go:141] libmachine: Using API Version  1
	I1209 23:59:31.929060  326435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:31.929467  326435 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:31.929681  326435 main.go:141] libmachine: (functional-283319) Calling .DriverName
	I1209 23:59:31.929927  326435 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:59:31.930386  326435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:59:31.930427  326435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:31.949466  326435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1209 23:59:31.950054  326435 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:31.950609  326435 main.go:141] libmachine: Using API Version  1
	I1209 23:59:31.950638  326435 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:31.950943  326435 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:31.951158  326435 main.go:141] libmachine: (functional-283319) Calling .DriverName
	I1209 23:59:31.988318  326435 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:59:31.989745  326435 start.go:297] selected driver: kvm2
	I1209 23:59:31.989758  326435 start.go:901] validating driver "kvm2" against &{Name:functional-283319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-283319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:31.989865  326435 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:59:31.991907  326435 out.go:201] 
	W1209 23:59:31.993378  326435 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 23:59:31.994720  326435 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.30s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-283319 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-283319 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.093305ms)

-- stdout --
	* [functional-283319] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 23:59:31.820460  326423 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:59:31.820579  326423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:31.820591  326423 out.go:358] Setting ErrFile to fd 2...
	I1209 23:59:31.820598  326423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:59:31.820881  326423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1209 23:59:31.821497  326423 out.go:352] Setting JSON to false
	I1209 23:59:31.822511  326423 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":27694,"bootTime":1733761078,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:59:31.822628  326423 start.go:139] virtualization: kvm guest
	I1209 23:59:31.824751  326423 out.go:177] * [functional-283319] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1209 23:59:31.826206  326423 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:59:31.826186  326423 notify.go:220] Checking for updates...
	I1209 23:59:31.829118  326423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:59:31.830463  326423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1209 23:59:31.831802  326423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1209 23:59:31.833134  326423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:59:31.834384  326423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:59:31.835961  326423 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1209 23:59:31.836462  326423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:59:31.836533  326423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:31.853550  326423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I1209 23:59:31.854147  326423 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:31.854769  326423 main.go:141] libmachine: Using API Version  1
	I1209 23:59:31.854789  326423 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:31.855237  326423 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:31.855441  326423 main.go:141] libmachine: (functional-283319) Calling .DriverName
	I1209 23:59:31.855689  326423 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:59:31.855973  326423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1209 23:59:31.856010  326423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:31.874085  326423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43159
	I1209 23:59:31.874595  326423 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:31.875180  326423 main.go:141] libmachine: Using API Version  1
	I1209 23:59:31.875208  326423 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:31.875642  326423 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:31.875841  326423 main.go:141] libmachine: (functional-283319) Calling .DriverName
	I1209 23:59:31.915087  326423 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 23:59:31.916637  326423 start.go:297] selected driver: kvm2
	I1209 23:59:31.916657  326423 start.go:901] validating driver "kvm2" against &{Name:functional-283319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-283319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:31.916826  326423 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:59:31.920000  326423 out.go:201] 
	W1209 23:59:31.921590  326423 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 23:59:31.922947  326423 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
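The second status invocation above renders through a custom Go template (`host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}`), so a healthy cluster prints a single `key:value,key:value` line. A small parsing sketch for a consumer of that format; the sample string is inferred from the template, not copied from this run:

```python
def parse_status_line(line: str) -> dict:
    """Split 'key:value,key:value' output from a custom minikube status format."""
    fields = {}
    for pair in line.strip().split(","):
        key, _, value = pair.partition(":")  # split on the first ':' only
        fields[key] = value
    return fields

# Inferred shape of the output for a running cluster (not captured in this log).
sample = "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"
```

Note the `kublet` key simply echoes the (misspelled) name chosen in the test's format string; the template key names are arbitrary.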

TestFunctional/parallel/ServiceCmdConnect (21.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-283319 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-283319 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2jnms" [e393b659-8c17-42d4-aef9-391a419c7ea7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2jnms" [e393b659-8c17-42d4-aef9-391a419c7ea7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.005985731s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.152:31893
functional_test.go:1675: http://192.168.39.152:31893: success! body:

Hostname: hello-node-connect-67bdd5bbb4-2jnms

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.152:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.152:31893
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.57s)
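The endpoint `http://192.168.39.152:31893` found above pairs the node IP with the NodePort that Kubernetes allocated for the exposed service; with default apiserver settings, NodePorts come from the 30000-32767 range. A sketch of assembling and sanity-checking such a URL (the helper name is illustrative):

```python
def node_port_url(node_ip: str, node_port: int) -> str:
    """Build the URL for a NodePort service, checking the default port range."""
    # 30000-32767 is the default --service-node-port-range of kube-apiserver.
    if not 30000 <= node_port <= 32767:
        raise ValueError(f"{node_port} is outside the default NodePort range")
    return f"http://{node_ip}:{node_port}"
```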

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (47.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8edda5dd-9016-43c6-b075-78417958a63b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003732268s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-283319 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-283319 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-283319 get pvc myclaim -o=json
I1209 23:59:13.114258  316833 retry.go:31] will retry after 2.208031517s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1178fa5d-3dab-4632-9c08-6ca5b3712a1e ResourceVersion:783 Generation:0 CreationTimestamp:2024-12-09 23:59:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001771700 VolumeMode:0xc001771710 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
E1209 23:59:13.190267  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-283319 get pvc myclaim -o=json
I1209 23:59:15.388942  316833 retry.go:31] will retry after 2.10734669s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1178fa5d-3dab-4632-9c08-6ca5b3712a1e ResourceVersion:783 Generation:0 CreationTimestamp:2024-12-09 23:59:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ad27e0 VolumeMode:0xc001ad27f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
E1209 23:59:15.752686  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-283319 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-283319 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [756b499a-6f48-4e4d-a30e-b06c0782e087] Pending
helpers_test.go:344: "sp-pod" [756b499a-6f48-4e4d-a30e-b06c0782e087] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [756b499a-6f48-4e4d-a30e-b06c0782e087] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004766629s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-283319 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-283319 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-283319 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7caa0919-732c-40e4-a9da-304adb912d89] Pending
helpers_test.go:344: "sp-pod" [7caa0919-732c-40e4-a9da-304adb912d89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7caa0919-732c-40e4-a9da-304adb912d89] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004010728s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-283319 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.91s)
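The `will retry after 2.208031517s` lines above come from the test polling `kubectl get pvc myclaim -o=json` until the claim's phase flips from Pending to Bound. A minimal polling loop in the same spirit; the function name and the fake phase source standing in for kubectl are illustrative, not minikube's actual retry.go code:

```python
import itertools
import time

def wait_for_phase(get_phase, want="Bound", timeout_s=60.0, interval_s=0.01):
    """Poll get_phase() until it returns `want` or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase == want:
            return phase
        time.sleep(interval_s)
    raise TimeoutError(f"phase never reached {want!r}")

# Stand-in for repeated `kubectl get pvc` calls: Pending twice, then Bound,
# mirroring the two retries recorded in the log above.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Bound"))
```

Real retry helpers (including minikube's) typically also back off between attempts rather than using a fixed interval, as the varying retry delays in the log suggest.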

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh -n functional-283319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cp functional-283319:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3200531828/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh -n functional-283319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh -n functional-283319 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/MySQL (25.37s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-283319 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-xkf6f" [029d9de2-933c-4785-ba6b-d9c9a98edea2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-xkf6f" [029d9de2-933c-4785-ba6b-d9c9a98edea2] Running
E1209 23:59:20.874121  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004780137s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;": exit status 1 (215.341258ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1209 23:59:25.895904  316833 retry.go:31] will retry after 1.070908103s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;": exit status 1 (244.310366ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1209 23:59:27.212281  316833 retry.go:31] will retry after 1.393942276s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;": exit status 1 (177.553549ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1209 23:59:28.784931  316833 retry.go:31] will retry after 2.835174758s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-283319 exec mysql-6cdb49bbb-xkf6f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.37s)
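The retries above ride out two distinct startup-phase failures: ERROR 1045 (the server is up but credentials are not yet provisioned during container init) and ERROR 2002 (mysqld is not yet listening on its socket). A sketch of treating both as transient; classifying these codes as retryable is an assumption drawn from this run's behavior, not a general MySQL rule:

```python
# MySQL error codes this run retried through while the pod finished starting.
# Treating them as transient is an assumption based on the log above.
TRANSIENT_MYSQL_ERRORS = {
    "ERROR 1045",  # Access denied: credentials not provisioned yet during init
    "ERROR 2002",  # Can't connect through socket: mysqld not listening yet
}

def is_transient(stderr: str) -> bool:
    """Return True if stderr matches a known startup-phase MySQL failure."""
    return any(code in stderr for code in TRANSIENT_MYSQL_ERRORS)
```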

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/316833/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /etc/test/nested/copy/316833/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/316833.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /etc/ssl/certs/316833.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/316833.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /usr/share/ca-certificates/316833.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3168332.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /etc/ssl/certs/3168332.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3168332.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /usr/share/ca-certificates/3168332.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-283319 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
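The go-template above (`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`) prints the label keys of the first node returned by `kubectl get nodes`. The same walk over the JSON form of that output, sketched in Python with a trimmed sample document (the sample labels are illustrative):

```python
def first_node_label_keys(nodes_doc: dict) -> list:
    """Mirror the go-template: label keys of the first item, sorted for stability."""
    labels = nodes_doc["items"][0]["metadata"]["labels"]
    return sorted(labels)

# Trimmed sample shaped like `kubectl get nodes -o json` output; the label set
# here is illustrative, not this run's actual node labels.
sample = {
    "items": [
        {"metadata": {"labels": {
            "kubernetes.io/hostname": "functional-283319",
            "kubernetes.io/os": "linux",
        }}}
    ]
}
```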

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "sudo systemctl is-active docker": exit status 1 (214.444676ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "sudo systemctl is-active crio": exit status 1 (217.301742ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
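The two non-zero exits above are the pass condition here: on a containerd cluster, `systemctl is-active docker` and `systemctl is-active crio` should both print `inactive` and exit with status 3. A minimal sketch of that mapping (the helper name is hypothetical, not part of minikube; 0 = active and 3 = inactive follow systemd's `is-active` convention):

```go
package main

import "fmt"

// interpretIsActive maps a `systemctl is-active` exit code to the
// state the test expects. Hypothetical helper for illustration only.
func interpretIsActive(exitCode int) string {
	switch exitCode {
	case 0:
		return "active"
	case 3:
		return "inactive" // what both ssh probes above returned
	default:
		return "unknown"
	}
}

func main() {
	// Both probes exited with status 3 on this containerd node.
	fmt.Println(interpretIsActive(3))
}
```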

TestFunctional/parallel/License (0.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-283319 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-283319 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-gxmp7" [c77958f7-6628-48f3-8633-1c57f1c3d779] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1209 23:59:10.620065  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.626532  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.637944  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.659387  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.700770  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.782348  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:10.944771  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:11.266160  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:11.908126  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-6b9f76b5c7-gxmp7" [c77958f7-6628-48f3-8633-1c57f1c3d779] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.008740096s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "303.521621ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.810672ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service list -o json
functional_test.go:1494: Took "467.226234ms" to run "out/minikube-linux-amd64 -p functional-283319 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "280.111657ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.189886ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.152:30372
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/MountCmd/any-port (9.46s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdany-port597676112/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733788770006893487" to /tmp/TestFunctionalparallelMountCmdany-port597676112/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733788770006893487" to /tmp/TestFunctionalparallelMountCmdany-port597676112/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733788770006893487" to /tmp/TestFunctionalparallelMountCmdany-port597676112/001/test-1733788770006893487
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.500042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 23:59:30.218793  316833 retry.go:31] will retry after 459.916384ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 23:59 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 23:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 23:59 test-1733788770006893487
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh cat /mount-9p/test-1733788770006893487
E1209 23:59:31.116229  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-283319 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e6df6cd0-05ae-44ef-8455-cb397ae647a0] Pending
helpers_test.go:344: "busybox-mount" [e6df6cd0-05ae-44ef-8455-cb397ae647a0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e6df6cd0-05ae-44ef-8455-cb397ae647a0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e6df6cd0-05ae-44ef-8455-cb397ae647a0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004991485s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-283319 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdany-port597676112/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.46s)
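The mount probe above runs `findmnt -T /mount-9p | grep 9p` in the guest and retries once until the 9p export appears. The same check can be sketched against `/proc/mounts`-style data (the helper and the sample line are illustrative, not taken from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// has9pMount reports whether any /proc/mounts-style line mounts the
// given target with filesystem type 9p, mirroring the
// `findmnt -T /mount-9p | grep 9p` probe the test retries above.
func has9pMount(procMounts, target string) bool {
	for _, line := range strings.Split(procMounts, "\n") {
		f := strings.Fields(line)
		// fields: source target fstype options dump pass
		if len(f) >= 3 && f[1] == target && f[2] == "9p" {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical mount-table line for a minikube 9p host mount.
	sample := "192.168.39.1 /mount-9p 9p rw,relatime,trans=tcp 0 0"
	fmt.Println(has9pMount(sample, "/mount-9p"))
}
```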

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.152:30372
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-283319 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-283319
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-283319
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-283319 image ls --format short --alsologtostderr:
I1209 23:59:44.515828  327761 out.go:345] Setting OutFile to fd 1 ...
I1209 23:59:44.515968  327761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.515979  327761 out.go:358] Setting ErrFile to fd 2...
I1209 23:59:44.515984  327761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.516281  327761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
I1209 23:59:44.517033  327761 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.517188  327761 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.517626  327761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.517693  327761 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.534321  327761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
I1209 23:59:44.534834  327761 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.535607  327761 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.535632  327761 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.535983  327761 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.536273  327761 main.go:141] libmachine: (functional-283319) Calling .GetState
I1209 23:59:44.538695  327761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.538747  327761 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.556048  327761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
I1209 23:59:44.556626  327761 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.557247  327761 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.557298  327761 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.557597  327761 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.557808  327761 main.go:141] libmachine: (functional-283319) Calling .DriverName
I1209 23:59:44.557965  327761 ssh_runner.go:195] Run: systemctl --version
I1209 23:59:44.557993  327761 main.go:141] libmachine: (functional-283319) Calling .GetSSHHostname
I1209 23:59:44.561028  327761 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.561418  327761 main.go:141] libmachine: (functional-283319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:47:cb", ip: ""} in network mk-functional-283319: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:15 +0000 UTC Type:0 Mac:52:54:00:84:47:cb Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:functional-283319 Clientid:01:52:54:00:84:47:cb}
I1209 23:59:44.561445  327761 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined IP address 192.168.39.152 and MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.561645  327761 main.go:141] libmachine: (functional-283319) Calling .GetSSHPort
I1209 23:59:44.561774  327761 main.go:141] libmachine: (functional-283319) Calling .GetSSHKeyPath
I1209 23:59:44.561850  327761 main.go:141] libmachine: (functional-283319) Calling .GetSSHUsername
I1209 23:59:44.561926  327761 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/functional-283319/id_rsa Username:docker}
I1209 23:59:44.662423  327761 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:59:44.708416  327761 main.go:141] libmachine: Making call to close driver server
I1209 23:59:44.708436  327761 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:44.708731  327761 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:44.708760  327761 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:44.708769  327761 main.go:141] libmachine: Making call to close driver server
I1209 23:59:44.708777  327761 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:44.708785  327761 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:44.709043  327761 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:44.709086  327761 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:44.709112  327761 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-283319 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:2e96e5 | 56.9MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:9499c9 | 28MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kicbase/echo-server               | functional-283319  | sha256:9056ab | 2.37MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:847c7b | 20.1MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/nginx                     | latest             | sha256:66f8bd | 72.1MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/minikube-local-cache-test | functional-283319  | sha256:8dfce5 | 991B   |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:3a5bc2 | 38.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:0486b6 | 26.1MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:505d57 | 30.2MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-283319 image ls --format table --alsologtostderr:
I1209 23:59:45.100619  327912 out.go:345] Setting OutFile to fd 1 ...
I1209 23:59:45.100748  327912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:45.100757  327912 out.go:358] Setting ErrFile to fd 2...
I1209 23:59:45.100761  327912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:45.100943  327912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
I1209 23:59:45.101557  327912 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:45.101658  327912 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:45.101994  327912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:45.102051  327912 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:45.118109  327912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
I1209 23:59:45.118607  327912 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:45.119219  327912 main.go:141] libmachine: Using API Version  1
I1209 23:59:45.119241  327912 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:45.119677  327912 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:45.119882  327912 main.go:141] libmachine: (functional-283319) Calling .GetState
I1209 23:59:45.122041  327912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:45.122098  327912 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:45.137733  327912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
I1209 23:59:45.138247  327912 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:45.138909  327912 main.go:141] libmachine: Using API Version  1
I1209 23:59:45.138946  327912 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:45.139331  327912 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:45.139524  327912 main.go:141] libmachine: (functional-283319) Calling .DriverName
I1209 23:59:45.139728  327912 ssh_runner.go:195] Run: systemctl --version
I1209 23:59:45.139757  327912 main.go:141] libmachine: (functional-283319) Calling .GetSSHHostname
I1209 23:59:45.143028  327912 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:45.143458  327912 main.go:141] libmachine: (functional-283319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:47:cb", ip: ""} in network mk-functional-283319: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:15 +0000 UTC Type:0 Mac:52:54:00:84:47:cb Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:functional-283319 Clientid:01:52:54:00:84:47:cb}
I1209 23:59:45.143482  327912 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined IP address 192.168.39.152 and MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:45.143623  327912 main.go:141] libmachine: (functional-283319) Calling .GetSSHPort
I1209 23:59:45.143807  327912 main.go:141] libmachine: (functional-283319) Calling .GetSSHKeyPath
I1209 23:59:45.143933  327912 main.go:141] libmachine: (functional-283319) Calling .GetSSHUsername
I1209 23:59:45.144119  327912 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/functional-283319/id_rsa Username:docker}
I1209 23:59:45.258936  327912 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:59:45.332917  327912 main.go:141] libmachine: Making call to close driver server
I1209 23:59:45.332945  327912 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:45.333256  327912 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:45.333286  327912 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:45.333299  327912 main.go:141] libmachine: Making call to close driver server
I1209 23:59:45.333306  327912 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:45.333558  327912 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:45.333624  327912 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:45.333670  327912 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-283319 image ls --format json --alsologtostderr:
[{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"38600298"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-283319"],"size":"2372971"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"20102990"},{"id":"sha256:8dfce593f05b715cd71944e1db816777e54c88e3b2d87c6d11a00643c4a9e4f4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-283319"],"size":"991"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"56909194"},{"id":"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"27972388"},{"id":"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"26147288"},{"id":"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"30225833"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"72099501"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-mini
kube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-283319 image ls --format json --alsologtostderr:
I1209 23:59:44.842182  327854 out.go:345] Setting OutFile to fd 1 ...
I1209 23:59:44.842786  327854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.842797  327854 out.go:358] Setting ErrFile to fd 2...
I1209 23:59:44.842802  327854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.843368  327854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
I1209 23:59:44.844354  327854 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.844504  327854 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.845182  327854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.845215  327854 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.861003  327854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
I1209 23:59:44.861602  327854 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.862233  327854 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.862257  327854 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.862646  327854 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.862833  327854 main.go:141] libmachine: (functional-283319) Calling .GetState
I1209 23:59:44.864675  327854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.864723  327854 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.880428  327854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
I1209 23:59:44.881034  327854 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.881600  327854 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.881628  327854 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.881970  327854 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.882138  327854 main.go:141] libmachine: (functional-283319) Calling .DriverName
I1209 23:59:44.882362  327854 ssh_runner.go:195] Run: systemctl --version
I1209 23:59:44.882393  327854 main.go:141] libmachine: (functional-283319) Calling .GetSSHHostname
I1209 23:59:44.885499  327854 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.885940  327854 main.go:141] libmachine: (functional-283319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:47:cb", ip: ""} in network mk-functional-283319: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:15 +0000 UTC Type:0 Mac:52:54:00:84:47:cb Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:functional-283319 Clientid:01:52:54:00:84:47:cb}
I1209 23:59:44.885962  327854 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined IP address 192.168.39.152 and MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.886113  327854 main.go:141] libmachine: (functional-283319) Calling .GetSSHPort
I1209 23:59:44.886280  327854 main.go:141] libmachine: (functional-283319) Calling .GetSSHKeyPath
I1209 23:59:44.886444  327854 main.go:141] libmachine: (functional-283319) Calling .GetSSHUsername
I1209 23:59:44.886583  327854 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/functional-283319/id_rsa Username:docker}
I1209 23:59:44.978609  327854 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:59:45.035590  327854 main.go:141] libmachine: Making call to close driver server
I1209 23:59:45.035616  327854 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:45.035861  327854 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:45.035879  327854 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:45.035896  327854 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:45.035914  327854 main.go:141] libmachine: Making call to close driver server
I1209 23:59:45.035926  327854 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:45.036173  327854 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:45.036201  327854 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:45.036217  327854 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-283319 image ls --format yaml --alsologtostderr:
- id: sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "56909194"
- id: sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "20102990"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "30225833"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-283319
size: "2372971"
- id: sha256:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "38600298"
- id: sha256:8dfce593f05b715cd71944e1db816777e54c88e3b2d87c6d11a00643c4a9e4f4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-283319
size: "991"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "27972388"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "72099501"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "26147288"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-283319 image ls --format yaml --alsologtostderr:
I1209 23:59:44.567897  327800 out.go:345] Setting OutFile to fd 1 ...
I1209 23:59:44.568175  327800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.568186  327800 out.go:358] Setting ErrFile to fd 2...
I1209 23:59:44.568190  327800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.568375  327800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
I1209 23:59:44.568938  327800 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.569035  327800 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.569401  327800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.569446  327800 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.585176  327800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
I1209 23:59:44.585802  327800 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.586479  327800 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.586512  327800 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.586858  327800 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.587061  327800 main.go:141] libmachine: (functional-283319) Calling .GetState
I1209 23:59:44.588938  327800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.588976  327800 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:44.604101  327800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
I1209 23:59:44.604590  327800 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:44.605054  327800 main.go:141] libmachine: Using API Version  1
I1209 23:59:44.605080  327800 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:44.605808  327800 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:44.606177  327800 main.go:141] libmachine: (functional-283319) Calling .DriverName
I1209 23:59:44.606482  327800 ssh_runner.go:195] Run: systemctl --version
I1209 23:59:44.606523  327800 main.go:141] libmachine: (functional-283319) Calling .GetSSHHostname
I1209 23:59:44.609734  327800 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.610106  327800 main.go:141] libmachine: (functional-283319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:47:cb", ip: ""} in network mk-functional-283319: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:15 +0000 UTC Type:0 Mac:52:54:00:84:47:cb Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:functional-283319 Clientid:01:52:54:00:84:47:cb}
I1209 23:59:44.610131  327800 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined IP address 192.168.39.152 and MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:44.610325  327800 main.go:141] libmachine: (functional-283319) Calling .GetSSHPort
I1209 23:59:44.610519  327800 main.go:141] libmachine: (functional-283319) Calling .GetSSHKeyPath
I1209 23:59:44.610670  327800 main.go:141] libmachine: (functional-283319) Calling .GetSSHUsername
I1209 23:59:44.610793  327800 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/functional-283319/id_rsa Username:docker}
I1209 23:59:44.723485  327800 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:59:44.784571  327800 main.go:141] libmachine: Making call to close driver server
I1209 23:59:44.784588  327800 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:44.784828  327800 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:44.784843  327800 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:44.784853  327800 main.go:141] libmachine: Making call to close driver server
I1209 23:59:44.784860  327800 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:44.785146  327800 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:44.785167  327800 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh pgrep buildkitd: exit status 1 (213.170304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image build -t localhost/my-image:functional-283319 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 image build -t localhost/my-image:functional-283319 testdata/build --alsologtostderr: (5.494017079s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-283319 image build -t localhost/my-image:functional-283319 testdata/build --alsologtostderr:
I1209 23:59:44.985238  327889 out.go:345] Setting OutFile to fd 1 ...
I1209 23:59:44.985373  327889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.985384  327889 out.go:358] Setting ErrFile to fd 2...
I1209 23:59:44.985390  327889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:59:44.985690  327889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
I1209 23:59:44.986603  327889 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.987274  327889 config.go:182] Loaded profile config "functional-283319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 23:59:44.987696  327889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:44.987753  327889 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:45.003356  327889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
I1209 23:59:45.003893  327889 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:45.004521  327889 main.go:141] libmachine: Using API Version  1
I1209 23:59:45.004553  327889 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:45.004896  327889 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:45.005153  327889 main.go:141] libmachine: (functional-283319) Calling .GetState
I1209 23:59:45.007176  327889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1209 23:59:45.007230  327889 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 23:59:45.023410  327889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
I1209 23:59:45.024056  327889 main.go:141] libmachine: () Calling .GetVersion
I1209 23:59:45.024640  327889 main.go:141] libmachine: Using API Version  1
I1209 23:59:45.024667  327889 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 23:59:45.025048  327889 main.go:141] libmachine: () Calling .GetMachineName
I1209 23:59:45.025264  327889 main.go:141] libmachine: (functional-283319) Calling .DriverName
I1209 23:59:45.025469  327889 ssh_runner.go:195] Run: systemctl --version
I1209 23:59:45.025499  327889 main.go:141] libmachine: (functional-283319) Calling .GetSSHHostname
I1209 23:59:45.028594  327889 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:45.029067  327889 main.go:141] libmachine: (functional-283319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:47:cb", ip: ""} in network mk-functional-283319: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:15 +0000 UTC Type:0 Mac:52:54:00:84:47:cb Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:functional-283319 Clientid:01:52:54:00:84:47:cb}
I1209 23:59:45.029106  327889 main.go:141] libmachine: (functional-283319) DBG | domain functional-283319 has defined IP address 192.168.39.152 and MAC address 52:54:00:84:47:cb in network mk-functional-283319
I1209 23:59:45.029229  327889 main.go:141] libmachine: (functional-283319) Calling .GetSSHPort
I1209 23:59:45.029422  327889 main.go:141] libmachine: (functional-283319) Calling .GetSSHKeyPath
I1209 23:59:45.029591  327889 main.go:141] libmachine: (functional-283319) Calling .GetSSHUsername
I1209 23:59:45.029738  327889 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/functional-283319/id_rsa Username:docker}
I1209 23:59:45.123605  327889 build_images.go:161] Building image from path: /tmp/build.2875682088.tar
I1209 23:59:45.123691  327889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 23:59:45.135747  327889 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2875682088.tar
I1209 23:59:45.142552  327889 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2875682088.tar: stat -c "%s %y" /var/lib/minikube/build/build.2875682088.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2875682088.tar': No such file or directory
I1209 23:59:45.142583  327889 ssh_runner.go:362] scp /tmp/build.2875682088.tar --> /var/lib/minikube/build/build.2875682088.tar (3072 bytes)
I1209 23:59:45.196497  327889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2875682088
I1209 23:59:45.207718  327889 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2875682088 -xf /var/lib/minikube/build/build.2875682088.tar
I1209 23:59:45.218281  327889 containerd.go:394] Building image: /var/lib/minikube/build/build.2875682088
I1209 23:59:45.218354  327889 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2875682088 --local dockerfile=/var/lib/minikube/build/build.2875682088 --output type=image,name=localhost/my-image:functional-283319
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ffab6a28b9b19ff4bd829ece99d24736a7fb1652ddbb25f61a7afb2bd293bba3
#8 exporting manifest sha256:ffab6a28b9b19ff4bd829ece99d24736a7fb1652ddbb25f61a7afb2bd293bba3 0.0s done
#8 exporting config sha256:c9f5fb71d1502b4a802c38f5680b1aca6dc963347802ed87384979bc36b1fa38 0.0s done
#8 naming to localhost/my-image:functional-283319 done
#8 DONE 0.2s
I1209 23:59:50.388777  327889 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2875682088 --local dockerfile=/var/lib/minikube/build/build.2875682088 --output type=image,name=localhost/my-image:functional-283319: (5.170377751s)
I1209 23:59:50.388863  327889 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2875682088
I1209 23:59:50.402247  327889 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2875682088.tar
I1209 23:59:50.416333  327889 build_images.go:217] Built localhost/my-image:functional-283319 from /tmp/build.2875682088.tar
I1209 23:59:50.416380  327889 build_images.go:133] succeeded building to: functional-283319
I1209 23:59:50.416387  327889 build_images.go:134] failed building to: 
I1209 23:59:50.416421  327889 main.go:141] libmachine: Making call to close driver server
I1209 23:59:50.416439  327889 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:50.416696  327889 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
I1209 23:59:50.416728  327889 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:50.416758  327889 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:50.416774  327889 main.go:141] libmachine: Making call to close driver server
I1209 23:59:50.416782  327889 main.go:141] libmachine: (functional-283319) Calling .Close
I1209 23:59:50.416994  327889 main.go:141] libmachine: Successfully made call to close driver server
I1209 23:59:50.417007  327889 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 23:59:50.417033  327889 main.go:141] libmachine: (functional-283319) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
E1209 23:59:51.597700  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
2024/12/09 23:59:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.487131713s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-283319
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image load --daemon kicbase/echo-server:functional-283319 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 image load --daemon kicbase/echo-server:functional-283319 --alsologtostderr: (1.235004554s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image load --daemon kicbase/echo-server:functional-283319 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.170765132s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-283319
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image load --daemon kicbase/echo-server:functional-283319 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-283319 image load --daemon kicbase/echo-server:functional-283319 --alsologtostderr: (1.008422691s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.43s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdspecific-port3268388948/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.930323ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1209 23:59:39.712750  316833 retry.go:31] will retry after 713.282786ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdspecific-port3268388948/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "sudo umount -f /mount-9p": exit status 1 (218.409235ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-283319 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdspecific-port3268388948/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image save kicbase/echo-server:functional-283319 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image rm kicbase/echo-server:functional-283319 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T" /mount1: exit status 1 (285.048015ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1209 23:59:41.793080  316833 retry.go:31] will retry after 380.741581ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-283319 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-283319 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605547150/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-283319
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-283319 image save --daemon kicbase/echo-server:functional-283319 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-283319
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-283319
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-283319
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-283319
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (198.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-296720 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1210 00:00:32.559450  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:01:54.481689  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-296720 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m17.537946003s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.24s)

TestMultiControlPlane/serial/DeployApp (8.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-296720 -- rollout status deployment/busybox: (5.836168641s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-65t6h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-9bdxt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-n5gdx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-65t6h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-9bdxt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-n5gdx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-65t6h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-9bdxt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-n5gdx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.07s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-65t6h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-65t6h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-9bdxt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-9bdxt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-n5gdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-296720 -- exec busybox-7dff88458-n5gdx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (59.62s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-296720 -v=7 --alsologtostderr
E1210 00:04:06.675916  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:06.682501  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:06.693969  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:06.715415  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:06.756866  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:06.838398  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:07.000077  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:07.321835  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:07.963785  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:09.245960  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:10.619405  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:11.807946  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:16.929771  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-296720 -v=7 --alsologtostderr: (58.710872836s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.62s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-296720 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (13.51s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp testdata/cp-test.txt ha-296720:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2104379838/001/cp-test_ha-296720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720:/home/docker/cp-test.txt ha-296720-m02:/home/docker/cp-test_ha-296720_ha-296720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test_ha-296720_ha-296720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720:/home/docker/cp-test.txt ha-296720-m03:/home/docker/cp-test_ha-296720_ha-296720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test.txt"
E1210 00:04:27.171408  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test_ha-296720_ha-296720-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720:/home/docker/cp-test.txt ha-296720-m04:/home/docker/cp-test_ha-296720_ha-296720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test_ha-296720_ha-296720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp testdata/cp-test.txt ha-296720-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2104379838/001/cp-test_ha-296720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m02:/home/docker/cp-test.txt ha-296720:/home/docker/cp-test_ha-296720-m02_ha-296720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test_ha-296720-m02_ha-296720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m02:/home/docker/cp-test.txt ha-296720-m03:/home/docker/cp-test_ha-296720-m02_ha-296720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test_ha-296720-m02_ha-296720-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m02:/home/docker/cp-test.txt ha-296720-m04:/home/docker/cp-test_ha-296720-m02_ha-296720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test_ha-296720-m02_ha-296720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp testdata/cp-test.txt ha-296720-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2104379838/001/cp-test_ha-296720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m03:/home/docker/cp-test.txt ha-296720:/home/docker/cp-test_ha-296720-m03_ha-296720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test_ha-296720-m03_ha-296720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m03:/home/docker/cp-test.txt ha-296720-m02:/home/docker/cp-test_ha-296720-m03_ha-296720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test_ha-296720-m03_ha-296720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m03:/home/docker/cp-test.txt ha-296720-m04:/home/docker/cp-test_ha-296720-m03_ha-296720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test_ha-296720-m03_ha-296720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp testdata/cp-test.txt ha-296720-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2104379838/001/cp-test_ha-296720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m04:/home/docker/cp-test.txt ha-296720:/home/docker/cp-test_ha-296720-m04_ha-296720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720 "sudo cat /home/docker/cp-test_ha-296720-m04_ha-296720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m04:/home/docker/cp-test.txt ha-296720-m02:/home/docker/cp-test_ha-296720-m04_ha-296720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m02 "sudo cat /home/docker/cp-test_ha-296720-m04_ha-296720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 cp ha-296720-m04:/home/docker/cp-test.txt ha-296720-m03:/home/docker/cp-test_ha-296720-m04_ha-296720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 ssh -n ha-296720-m03 "sudo cat /home/docker/cp-test_ha-296720-m04_ha-296720-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.51s)

TestMultiControlPlane/serial/StopSecondaryNode (92.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 node stop m02 -v=7 --alsologtostderr
E1210 00:04:38.323247  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:47.653155  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:05:28.615363  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-296720 node stop m02 -v=7 --alsologtostderr: (1m31.751153812s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr: exit status 7 (675.374641ms)

-- stdout --
	ha-296720
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-296720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-296720-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-296720-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1210 00:06:09.552549  332480 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:06:09.552683  332480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:06:09.552696  332480 out.go:358] Setting ErrFile to fd 2...
	I1210 00:06:09.552703  332480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:06:09.552932  332480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1210 00:06:09.553138  332480 out.go:352] Setting JSON to false
	I1210 00:06:09.553176  332480 mustload.go:65] Loading cluster: ha-296720
	I1210 00:06:09.553289  332480 notify.go:220] Checking for updates...
	I1210 00:06:09.553592  332480 config.go:182] Loaded profile config "ha-296720": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:06:09.553613  332480 status.go:174] checking status of ha-296720 ...
	I1210 00:06:09.554028  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.554068  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.574796  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I1210 00:06:09.575479  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.576228  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.576262  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.576809  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.577030  332480 main.go:141] libmachine: (ha-296720) Calling .GetState
	I1210 00:06:09.578879  332480 status.go:371] ha-296720 host status = "Running" (err=<nil>)
	I1210 00:06:09.578904  332480 host.go:66] Checking if "ha-296720" exists ...
	I1210 00:06:09.579275  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.579326  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.595217  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I1210 00:06:09.595736  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.596268  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.596293  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.596663  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.596885  332480 main.go:141] libmachine: (ha-296720) Calling .GetIP
	I1210 00:06:09.599850  332480 main.go:141] libmachine: (ha-296720) DBG | domain ha-296720 has defined MAC address 52:54:00:9f:f2:46 in network mk-ha-296720
	I1210 00:06:09.600445  332480 main.go:141] libmachine: (ha-296720) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f2:46", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:00:11 +0000 UTC Type:0 Mac:52:54:00:9f:f2:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-296720 Clientid:01:52:54:00:9f:f2:46}
	I1210 00:06:09.600486  332480 main.go:141] libmachine: (ha-296720) DBG | domain ha-296720 has defined IP address 192.168.39.235 and MAC address 52:54:00:9f:f2:46 in network mk-ha-296720
	I1210 00:06:09.600690  332480 host.go:66] Checking if "ha-296720" exists ...
	I1210 00:06:09.601009  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.601067  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.617685  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I1210 00:06:09.618228  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.618738  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.618758  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.619106  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.619307  332480 main.go:141] libmachine: (ha-296720) Calling .DriverName
	I1210 00:06:09.619522  332480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:06:09.619557  332480 main.go:141] libmachine: (ha-296720) Calling .GetSSHHostname
	I1210 00:06:09.622716  332480 main.go:141] libmachine: (ha-296720) DBG | domain ha-296720 has defined MAC address 52:54:00:9f:f2:46 in network mk-ha-296720
	I1210 00:06:09.623227  332480 main.go:141] libmachine: (ha-296720) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:f2:46", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:00:11 +0000 UTC Type:0 Mac:52:54:00:9f:f2:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-296720 Clientid:01:52:54:00:9f:f2:46}
	I1210 00:06:09.623267  332480 main.go:141] libmachine: (ha-296720) DBG | domain ha-296720 has defined IP address 192.168.39.235 and MAC address 52:54:00:9f:f2:46 in network mk-ha-296720
	I1210 00:06:09.623419  332480 main.go:141] libmachine: (ha-296720) Calling .GetSSHPort
	I1210 00:06:09.623620  332480 main.go:141] libmachine: (ha-296720) Calling .GetSSHKeyPath
	I1210 00:06:09.623781  332480 main.go:141] libmachine: (ha-296720) Calling .GetSSHUsername
	I1210 00:06:09.623945  332480 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/ha-296720/id_rsa Username:docker}
	I1210 00:06:09.709831  332480 ssh_runner.go:195] Run: systemctl --version
	I1210 00:06:09.717167  332480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:06:09.738347  332480 kubeconfig.go:125] found "ha-296720" server: "https://192.168.39.254:8443"
	I1210 00:06:09.738409  332480 api_server.go:166] Checking apiserver status ...
	I1210 00:06:09.738473  332480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:06:09.755625  332480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W1210 00:06:09.765810  332480 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:06:09.765868  332480 ssh_runner.go:195] Run: ls
	I1210 00:06:09.770815  332480 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 00:06:09.775593  332480 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 00:06:09.775622  332480 status.go:463] ha-296720 apiserver status = Running (err=<nil>)
	I1210 00:06:09.775633  332480 status.go:176] ha-296720 status: &{Name:ha-296720 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:06:09.775681  332480 status.go:174] checking status of ha-296720-m02 ...
	I1210 00:06:09.776118  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.776167  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.791889  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I1210 00:06:09.792450  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.793017  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.793048  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.793421  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.793611  332480 main.go:141] libmachine: (ha-296720-m02) Calling .GetState
	I1210 00:06:09.795540  332480 status.go:371] ha-296720-m02 host status = "Stopped" (err=<nil>)
	I1210 00:06:09.795558  332480 status.go:384] host is not running, skipping remaining checks
	I1210 00:06:09.795566  332480 status.go:176] ha-296720-m02 status: &{Name:ha-296720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:06:09.795589  332480 status.go:174] checking status of ha-296720-m03 ...
	I1210 00:06:09.796038  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.796093  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.812359  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I1210 00:06:09.812844  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.813397  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.813425  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.813764  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.813996  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetState
	I1210 00:06:09.815656  332480 status.go:371] ha-296720-m03 host status = "Running" (err=<nil>)
	I1210 00:06:09.815675  332480 host.go:66] Checking if "ha-296720-m03" exists ...
	I1210 00:06:09.815968  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.816024  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.831436  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46389
	I1210 00:06:09.831944  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.832535  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.832561  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.832980  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.833203  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetIP
	I1210 00:06:09.836363  332480 main.go:141] libmachine: (ha-296720-m03) DBG | domain ha-296720-m03 has defined MAC address 52:54:00:34:cf:fb in network mk-ha-296720
	I1210 00:06:09.836833  332480 main.go:141] libmachine: (ha-296720-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cf:fb", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:02:13 +0000 UTC Type:0 Mac:52:54:00:34:cf:fb Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-296720-m03 Clientid:01:52:54:00:34:cf:fb}
	I1210 00:06:09.836860  332480 main.go:141] libmachine: (ha-296720-m03) DBG | domain ha-296720-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:34:cf:fb in network mk-ha-296720
	I1210 00:06:09.836983  332480 host.go:66] Checking if "ha-296720-m03" exists ...
	I1210 00:06:09.837327  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:09.837368  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:09.852905  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I1210 00:06:09.853469  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:09.854083  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:09.854109  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:09.854440  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:09.854643  332480 main.go:141] libmachine: (ha-296720-m03) Calling .DriverName
	I1210 00:06:09.854831  332480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:06:09.854853  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetSSHHostname
	I1210 00:06:09.857887  332480 main.go:141] libmachine: (ha-296720-m03) DBG | domain ha-296720-m03 has defined MAC address 52:54:00:34:cf:fb in network mk-ha-296720
	I1210 00:06:09.858391  332480 main.go:141] libmachine: (ha-296720-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cf:fb", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:02:13 +0000 UTC Type:0 Mac:52:54:00:34:cf:fb Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-296720-m03 Clientid:01:52:54:00:34:cf:fb}
	I1210 00:06:09.858422  332480 main.go:141] libmachine: (ha-296720-m03) DBG | domain ha-296720-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:34:cf:fb in network mk-ha-296720
	I1210 00:06:09.858542  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetSSHPort
	I1210 00:06:09.858750  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetSSHKeyPath
	I1210 00:06:09.858907  332480 main.go:141] libmachine: (ha-296720-m03) Calling .GetSSHUsername
	I1210 00:06:09.859057  332480 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/ha-296720-m03/id_rsa Username:docker}
	I1210 00:06:09.948513  332480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:06:09.968683  332480 kubeconfig.go:125] found "ha-296720" server: "https://192.168.39.254:8443"
	I1210 00:06:09.968718  332480 api_server.go:166] Checking apiserver status ...
	I1210 00:06:09.968762  332480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:06:09.985504  332480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W1210 00:06:09.996061  332480 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:06:09.996153  332480 ssh_runner.go:195] Run: ls
	I1210 00:06:10.002623  332480 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1210 00:06:10.008441  332480 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1210 00:06:10.008470  332480 status.go:463] ha-296720-m03 apiserver status = Running (err=<nil>)
	I1210 00:06:10.008482  332480 status.go:176] ha-296720-m03 status: &{Name:ha-296720-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:06:10.008504  332480 status.go:174] checking status of ha-296720-m04 ...
	I1210 00:06:10.008817  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:10.008865  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:10.024464  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I1210 00:06:10.024966  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:10.025487  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:10.025504  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:10.025800  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:10.026032  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetState
	I1210 00:06:10.027582  332480 status.go:371] ha-296720-m04 host status = "Running" (err=<nil>)
	I1210 00:06:10.027605  332480 host.go:66] Checking if "ha-296720-m04" exists ...
	I1210 00:06:10.027896  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:10.027932  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:10.043308  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1210 00:06:10.043871  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:10.044453  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:10.044481  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:10.044830  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:10.045129  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetIP
	I1210 00:06:10.048157  332480 main.go:141] libmachine: (ha-296720-m04) DBG | domain ha-296720-m04 has defined MAC address 52:54:00:5b:f7:00 in network mk-ha-296720
	I1210 00:06:10.048611  332480 main.go:141] libmachine: (ha-296720-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:f7:00", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:03:40 +0000 UTC Type:0 Mac:52:54:00:5b:f7:00 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-296720-m04 Clientid:01:52:54:00:5b:f7:00}
	I1210 00:06:10.048645  332480 main.go:141] libmachine: (ha-296720-m04) DBG | domain ha-296720-m04 has defined IP address 192.168.39.56 and MAC address 52:54:00:5b:f7:00 in network mk-ha-296720
	I1210 00:06:10.048812  332480 host.go:66] Checking if "ha-296720-m04" exists ...
	I1210 00:06:10.049245  332480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:06:10.049295  332480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:10.065209  332480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1210 00:06:10.065731  332480 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:10.066268  332480 main.go:141] libmachine: Using API Version  1
	I1210 00:06:10.066292  332480 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:10.066710  332480 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:10.066877  332480 main.go:141] libmachine: (ha-296720-m04) Calling .DriverName
	I1210 00:06:10.067055  332480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:06:10.067083  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetSSHHostname
	I1210 00:06:10.069673  332480 main.go:141] libmachine: (ha-296720-m04) DBG | domain ha-296720-m04 has defined MAC address 52:54:00:5b:f7:00 in network mk-ha-296720
	I1210 00:06:10.070254  332480 main.go:141] libmachine: (ha-296720-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:f7:00", ip: ""} in network mk-ha-296720: {Iface:virbr1 ExpiryTime:2024-12-10 01:03:40 +0000 UTC Type:0 Mac:52:54:00:5b:f7:00 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-296720-m04 Clientid:01:52:54:00:5b:f7:00}
	I1210 00:06:10.070301  332480 main.go:141] libmachine: (ha-296720-m04) DBG | domain ha-296720-m04 has defined IP address 192.168.39.56 and MAC address 52:54:00:5b:f7:00 in network mk-ha-296720
	I1210 00:06:10.070444  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetSSHPort
	I1210 00:06:10.070636  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetSSHKeyPath
	I1210 00:06:10.070830  332480 main.go:141] libmachine: (ha-296720-m04) Calling .GetSSHUsername
	I1210 00:06:10.071021  332480 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/ha-296720-m04/id_rsa Username:docker}
	I1210 00:06:10.156052  332480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:06:10.174622  332480 status.go:176] ha-296720-m04 status: &{Name:ha-296720-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.43s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 node start m02 -v=7 --alsologtostderr
E1210 00:06:50.537298  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-296720 node start m02 -v=7 --alsologtostderr: (39.721123027s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (477.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-296720 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-296720 -v=7 --alsologtostderr
E1210 00:09:06.676632  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:09:10.620127  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:09:34.379104  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-296720 -v=7 --alsologtostderr: (4m36.255285506s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-296720 --wait=true -v=7 --alsologtostderr
E1210 00:14:06.676232  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:14:10.620223  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-296720 --wait=true -v=7 --alsologtostderr: (3m21.110456629s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-296720
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (477.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-296720 node delete m03 -v=7 --alsologtostderr: (6.309869855s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (274.74s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 stop -v=7 --alsologtostderr
E1210 00:15:33.684678  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:19:06.675545  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:19:10.619639  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-296720 stop -v=7 --alsologtostderr: (4m34.624620357s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr: exit status 7 (114.307757ms)

-- stdout --
	ha-296720
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-296720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-296720-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 00:19:32.272653  336983 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:19:32.272782  336983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:32.272792  336983 out.go:358] Setting ErrFile to fd 2...
	I1210 00:19:32.272796  336983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:32.273023  336983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1210 00:19:32.273211  336983 out.go:352] Setting JSON to false
	I1210 00:19:32.273243  336983 mustload.go:65] Loading cluster: ha-296720
	I1210 00:19:32.273288  336983 notify.go:220] Checking for updates...
	I1210 00:19:32.273643  336983 config.go:182] Loaded profile config "ha-296720": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:19:32.273676  336983 status.go:174] checking status of ha-296720 ...
	I1210 00:19:32.274070  336983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:19:32.274129  336983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:32.295480  336983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I1210 00:19:32.296036  336983 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:32.296725  336983 main.go:141] libmachine: Using API Version  1
	I1210 00:19:32.296748  336983 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:32.297132  336983 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:32.297376  336983 main.go:141] libmachine: (ha-296720) Calling .GetState
	I1210 00:19:32.299092  336983 status.go:371] ha-296720 host status = "Stopped" (err=<nil>)
	I1210 00:19:32.299111  336983 status.go:384] host is not running, skipping remaining checks
	I1210 00:19:32.299119  336983 status.go:176] ha-296720 status: &{Name:ha-296720 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:19:32.299170  336983 status.go:174] checking status of ha-296720-m02 ...
	I1210 00:19:32.299477  336983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:19:32.299513  336983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:32.314627  336983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I1210 00:19:32.315087  336983 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:32.315625  336983 main.go:141] libmachine: Using API Version  1
	I1210 00:19:32.315648  336983 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:32.315952  336983 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:32.316148  336983 main.go:141] libmachine: (ha-296720-m02) Calling .GetState
	I1210 00:19:32.317866  336983 status.go:371] ha-296720-m02 host status = "Stopped" (err=<nil>)
	I1210 00:19:32.317881  336983 status.go:384] host is not running, skipping remaining checks
	I1210 00:19:32.317887  336983 status.go:176] ha-296720-m02 status: &{Name:ha-296720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:19:32.317903  336983 status.go:174] checking status of ha-296720-m04 ...
	I1210 00:19:32.318196  336983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:19:32.318238  336983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:32.333246  336983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40269
	I1210 00:19:32.333765  336983 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:32.334269  336983 main.go:141] libmachine: Using API Version  1
	I1210 00:19:32.334287  336983 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:32.334576  336983 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:32.334742  336983 main.go:141] libmachine: (ha-296720-m04) Calling .GetState
	I1210 00:19:32.336348  336983 status.go:371] ha-296720-m04 host status = "Stopped" (err=<nil>)
	I1210 00:19:32.336366  336983 status.go:384] host is not running, skipping remaining checks
	I1210 00:19:32.336373  336983 status.go:176] ha-296720-m04 status: &{Name:ha-296720-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.74s)

TestMultiControlPlane/serial/RestartCluster (150.12s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-296720 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1210 00:20:29.740713  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-296720 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m29.33364061s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (150.12s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (79.72s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-296720 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-296720 --control-plane -v=7 --alsologtostderr: (1m18.837142583s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-296720 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.72s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (82.13s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-977698 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1210 00:24:06.678016  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:10.619888  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-977698 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.129844366s)
--- PASS: TestJSONOutput/start/Command (82.13s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-977698 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-977698 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.62s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-977698 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-977698 --output=json --user=testUser: (6.624086625s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-116120 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-116120 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.632646ms)

-- stdout --
	{"specversion":"1.0","id":"2fe6f92e-f49d-4a7c-8609-7058659726b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-116120] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37e433eb-3b8b-4f25-8f8f-070df5dcfc35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"537f65b9-15b5-448d-9f6f-43537c8cd972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3131f033-fb44-4a6e-9a16-98f14a0fd04e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig"}}
	{"specversion":"1.0","id":"0366614d-80de-41f5-9c7e-98ca012bf97a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube"}}
	{"specversion":"1.0","id":"e41b65df-4359-40ae-9220-1b2b729fb88a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"06ef19a7-f57b-471d-92d3-ee5f22dd85eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0887a529-b792-428c-9d47-e4c1199dd1cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-116120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-116120
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (91.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-103724 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-103724 --driver=kvm2  --container-runtime=containerd: (43.235832396s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-116257 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-116257 --driver=kvm2  --container-runtime=containerd: (44.95588282s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-103724
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-116257
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-116257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-116257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-116257: (1.004703279s)
helpers_test.go:175: Cleaning up "first-103724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-103724
--- PASS: TestMinikubeProfile (91.33s)

TestMountStart/serial/StartWithMountFirst (30.29s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-775930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-775930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.292190208s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.29s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-775930 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-775930 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (31.3s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-792185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-792185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.300562682s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.30s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-775930 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.49s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.49s)

TestMountStart/serial/Stop (1.57s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-792185
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-792185: (1.566845689s)
--- PASS: TestMountStart/serial/Stop (1.57s)

TestMountStart/serial/RestartStopped (23.59s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-792185
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-792185: (22.591683957s)
--- PASS: TestMountStart/serial/RestartStopped (23.59s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792185 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (118.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717317 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1210 00:29:06.676102  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:29:10.620229  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717317 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m58.360654694s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.78s)

TestMultiNode/serial/DeployApp2Nodes (6.97s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-717317 -- rollout status deployment/busybox: (5.355913637s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-8sqb6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-k7s7j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-8sqb6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-k7s7j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-8sqb6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-k7s7j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.97s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-8sqb6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-8sqb6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-k7s7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717317 -- exec busybox-7dff88458-k7s7j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (52.73s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-717317 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-717317 -v 3 --alsologtostderr: (52.161064822s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.73s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-717317 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.38s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp testdata/cp-test.txt multinode-717317:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856839934/001/cp-test_multinode-717317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317:/home/docker/cp-test.txt multinode-717317-m02:/home/docker/cp-test_multinode-717317_multinode-717317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test_multinode-717317_multinode-717317-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317:/home/docker/cp-test.txt multinode-717317-m03:/home/docker/cp-test_multinode-717317_multinode-717317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test_multinode-717317_multinode-717317-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp testdata/cp-test.txt multinode-717317-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856839934/001/cp-test_multinode-717317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m02:/home/docker/cp-test.txt multinode-717317:/home/docker/cp-test_multinode-717317-m02_multinode-717317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test_multinode-717317-m02_multinode-717317.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m02:/home/docker/cp-test.txt multinode-717317-m03:/home/docker/cp-test_multinode-717317-m02_multinode-717317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test_multinode-717317-m02_multinode-717317-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp testdata/cp-test.txt multinode-717317-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2856839934/001/cp-test_multinode-717317-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m03:/home/docker/cp-test.txt multinode-717317:/home/docker/cp-test_multinode-717317-m03_multinode-717317.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317 "sudo cat /home/docker/cp-test_multinode-717317-m03_multinode-717317.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 cp multinode-717317-m03:/home/docker/cp-test.txt multinode-717317-m02:/home/docker/cp-test_multinode-717317-m03_multinode-717317-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 ssh -n multinode-717317-m02 "sudo cat /home/docker/cp-test_multinode-717317-m03_multinode-717317-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-717317 node stop m03: (1.377075749s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717317 status: exit status 7 (435.329326ms)
-- stdout --
	multinode-717317
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717317-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717317-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr: exit status 7 (428.640437ms)
-- stdout --
	multinode-717317
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717317-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717317-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 00:31:08.206904  344736 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:31:08.207025  344736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:08.207062  344736 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:08.207073  344736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:08.207298  344736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1210 00:31:08.207494  344736 out.go:352] Setting JSON to false
	I1210 00:31:08.207526  344736 mustload.go:65] Loading cluster: multinode-717317
	I1210 00:31:08.207639  344736 notify.go:220] Checking for updates...
	I1210 00:31:08.208008  344736 config.go:182] Loaded profile config "multinode-717317": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:31:08.208033  344736 status.go:174] checking status of multinode-717317 ...
	I1210 00:31:08.208542  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.208616  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.226523  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I1210 00:31:08.226995  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.227810  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.227839  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.228423  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.228622  344736 main.go:141] libmachine: (multinode-717317) Calling .GetState
	I1210 00:31:08.230091  344736 status.go:371] multinode-717317 host status = "Running" (err=<nil>)
	I1210 00:31:08.230121  344736 host.go:66] Checking if "multinode-717317" exists ...
	I1210 00:31:08.230556  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.230628  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.246351  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1210 00:31:08.246845  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.247418  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.247471  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.247822  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.247991  344736 main.go:141] libmachine: (multinode-717317) Calling .GetIP
	I1210 00:31:08.250940  344736 main.go:141] libmachine: (multinode-717317) DBG | domain multinode-717317 has defined MAC address 52:54:00:73:35:ae in network mk-multinode-717317
	I1210 00:31:08.251403  344736 main.go:141] libmachine: (multinode-717317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:35:ae", ip: ""} in network mk-multinode-717317: {Iface:virbr1 ExpiryTime:2024-12-10 01:28:14 +0000 UTC Type:0 Mac:52:54:00:73:35:ae Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-717317 Clientid:01:52:54:00:73:35:ae}
	I1210 00:31:08.251438  344736 main.go:141] libmachine: (multinode-717317) DBG | domain multinode-717317 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:35:ae in network mk-multinode-717317
	I1210 00:31:08.251539  344736 host.go:66] Checking if "multinode-717317" exists ...
	I1210 00:31:08.251864  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.251908  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.267667  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I1210 00:31:08.268091  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.268643  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.268668  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.268987  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.269190  344736 main.go:141] libmachine: (multinode-717317) Calling .DriverName
	I1210 00:31:08.269414  344736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:31:08.269446  344736 main.go:141] libmachine: (multinode-717317) Calling .GetSSHHostname
	I1210 00:31:08.272395  344736 main.go:141] libmachine: (multinode-717317) DBG | domain multinode-717317 has defined MAC address 52:54:00:73:35:ae in network mk-multinode-717317
	I1210 00:31:08.272874  344736 main.go:141] libmachine: (multinode-717317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:35:ae", ip: ""} in network mk-multinode-717317: {Iface:virbr1 ExpiryTime:2024-12-10 01:28:14 +0000 UTC Type:0 Mac:52:54:00:73:35:ae Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-717317 Clientid:01:52:54:00:73:35:ae}
	I1210 00:31:08.272907  344736 main.go:141] libmachine: (multinode-717317) DBG | domain multinode-717317 has defined IP address 192.168.39.24 and MAC address 52:54:00:73:35:ae in network mk-multinode-717317
	I1210 00:31:08.273019  344736 main.go:141] libmachine: (multinode-717317) Calling .GetSSHPort
	I1210 00:31:08.273215  344736 main.go:141] libmachine: (multinode-717317) Calling .GetSSHKeyPath
	I1210 00:31:08.273369  344736 main.go:141] libmachine: (multinode-717317) Calling .GetSSHUsername
	I1210 00:31:08.273501  344736 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/multinode-717317/id_rsa Username:docker}
	I1210 00:31:08.355342  344736 ssh_runner.go:195] Run: systemctl --version
	I1210 00:31:08.361669  344736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:31:08.377342  344736 kubeconfig.go:125] found "multinode-717317" server: "https://192.168.39.24:8443"
	I1210 00:31:08.377380  344736 api_server.go:166] Checking apiserver status ...
	I1210 00:31:08.377415  344736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:31:08.390755  344736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1096/cgroup
	W1210 00:31:08.401064  344736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1096/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:31:08.401140  344736 ssh_runner.go:195] Run: ls
	I1210 00:31:08.405754  344736 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1210 00:31:08.410441  344736 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1210 00:31:08.410464  344736 status.go:463] multinode-717317 apiserver status = Running (err=<nil>)
	I1210 00:31:08.410474  344736 status.go:176] multinode-717317 status: &{Name:multinode-717317 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:31:08.410496  344736 status.go:174] checking status of multinode-717317-m02 ...
	I1210 00:31:08.410812  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.410861  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.427959  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1210 00:31:08.428418  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.428945  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.428966  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.429312  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.429539  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetState
	I1210 00:31:08.431240  344736 status.go:371] multinode-717317-m02 host status = "Running" (err=<nil>)
	I1210 00:31:08.431259  344736 host.go:66] Checking if "multinode-717317-m02" exists ...
	I1210 00:31:08.431542  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.431581  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.447528  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1210 00:31:08.448013  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.448503  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.448522  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.448880  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.449046  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetIP
	I1210 00:31:08.451854  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | domain multinode-717317-m02 has defined MAC address 52:54:00:6e:da:19 in network mk-multinode-717317
	I1210 00:31:08.452364  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:da:19", ip: ""} in network mk-multinode-717317: {Iface:virbr1 ExpiryTime:2024-12-10 01:29:21 +0000 UTC Type:0 Mac:52:54:00:6e:da:19 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-717317-m02 Clientid:01:52:54:00:6e:da:19}
	I1210 00:31:08.452386  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | domain multinode-717317-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:6e:da:19 in network mk-multinode-717317
	I1210 00:31:08.452573  344736 host.go:66] Checking if "multinode-717317-m02" exists ...
	I1210 00:31:08.452879  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.452927  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.468352  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I1210 00:31:08.468870  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.469356  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.469378  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.469697  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.469871  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .DriverName
	I1210 00:31:08.470053  344736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:31:08.470075  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetSSHHostname
	I1210 00:31:08.472945  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | domain multinode-717317-m02 has defined MAC address 52:54:00:6e:da:19 in network mk-multinode-717317
	I1210 00:31:08.473403  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:da:19", ip: ""} in network mk-multinode-717317: {Iface:virbr1 ExpiryTime:2024-12-10 01:29:21 +0000 UTC Type:0 Mac:52:54:00:6e:da:19 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-717317-m02 Clientid:01:52:54:00:6e:da:19}
	I1210 00:31:08.473441  344736 main.go:141] libmachine: (multinode-717317-m02) DBG | domain multinode-717317-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:6e:da:19 in network mk-multinode-717317
	I1210 00:31:08.473589  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetSSHPort
	I1210 00:31:08.473745  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetSSHKeyPath
	I1210 00:31:08.473881  344736 main.go:141] libmachine: (multinode-717317-m02) Calling .GetSSHUsername
	I1210 00:31:08.473968  344736 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-309592/.minikube/machines/multinode-717317-m02/id_rsa Username:docker}
	I1210 00:31:08.550835  344736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:31:08.565506  344736 status.go:176] multinode-717317-m02 status: &{Name:multinode-717317-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:31:08.565551  344736 status.go:174] checking status of multinode-717317-m03 ...
	I1210 00:31:08.565991  344736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:31:08.566052  344736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:31:08.582371  344736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I1210 00:31:08.582909  344736 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:31:08.583445  344736 main.go:141] libmachine: Using API Version  1
	I1210 00:31:08.583465  344736 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:31:08.583780  344736 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:31:08.583977  344736 main.go:141] libmachine: (multinode-717317-m03) Calling .GetState
	I1210 00:31:08.585444  344736 status.go:371] multinode-717317-m03 host status = "Stopped" (err=<nil>)
	I1210 00:31:08.585466  344736 status.go:384] host is not running, skipping remaining checks
	I1210 00:31:08.585472  344736 status.go:176] multinode-717317-m03 status: &{Name:multinode-717317-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (36.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-717317 node start m03 -v=7 --alsologtostderr: (35.706810231s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.36s)

TestMultiNode/serial/RestartKeepsNodes (317.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717317
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-717317
E1210 00:32:13.688560  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:34:06.678053  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:34:10.622413  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-717317: (3m4.437529473s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717317 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717317 --wait=true -v=8 --alsologtostderr: (2m13.424570169s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717317
--- PASS: TestMultiNode/serial/RestartKeepsNodes (317.97s)

TestMultiNode/serial/DeleteNode (2.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-717317 node delete m03: (1.673860926s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.21s)

TestMultiNode/serial/StopMultiNode (183.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 stop
E1210 00:37:09.743275  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:06.677401  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:10.620398  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-717317 stop: (3m3.060394158s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717317 status: exit status 7 (89.042226ms)
-- stdout --
	multinode-717317
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717317-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr: exit status 7 (95.158767ms)
-- stdout --
	multinode-717317
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717317-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 00:40:08.322752  347460 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:40:08.323085  347460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:40:08.323099  347460 out.go:358] Setting ErrFile to fd 2...
	I1210 00:40:08.323107  347460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:40:08.323320  347460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1210 00:40:08.323538  347460 out.go:352] Setting JSON to false
	I1210 00:40:08.323580  347460 mustload.go:65] Loading cluster: multinode-717317
	I1210 00:40:08.323673  347460 notify.go:220] Checking for updates...
	I1210 00:40:08.324043  347460 config.go:182] Loaded profile config "multinode-717317": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:40:08.324069  347460 status.go:174] checking status of multinode-717317 ...
	I1210 00:40:08.324549  347460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:40:08.324604  347460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:40:08.346935  347460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I1210 00:40:08.347546  347460 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:40:08.348109  347460 main.go:141] libmachine: Using API Version  1
	I1210 00:40:08.348134  347460 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:40:08.348459  347460 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:40:08.348639  347460 main.go:141] libmachine: (multinode-717317) Calling .GetState
	I1210 00:40:08.350141  347460 status.go:371] multinode-717317 host status = "Stopped" (err=<nil>)
	I1210 00:40:08.350153  347460 status.go:384] host is not running, skipping remaining checks
	I1210 00:40:08.350158  347460 status.go:176] multinode-717317 status: &{Name:multinode-717317 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:40:08.350175  347460 status.go:174] checking status of multinode-717317-m02 ...
	I1210 00:40:08.350476  347460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1210 00:40:08.350517  347460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:40:08.366024  347460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34095
	I1210 00:40:08.366537  347460 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:40:08.367015  347460 main.go:141] libmachine: Using API Version  1
	I1210 00:40:08.367050  347460 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:40:08.367376  347460 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:40:08.367577  347460 main.go:141] libmachine: (multinode-717317-m02) Calling .GetState
	I1210 00:40:08.368927  347460 status.go:371] multinode-717317-m02 host status = "Stopped" (err=<nil>)
	I1210 00:40:08.368940  347460 status.go:384] host is not running, skipping remaining checks
	I1210 00:40:08.368948  347460 status.go:176] multinode-717317-m02 status: &{Name:multinode-717317-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.25s)

TestMultiNode/serial/RestartMultiNode (105.56s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717317 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717317 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m45.025926075s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717317 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (105.56s)

TestMultiNode/serial/ValidateNameConflict (48.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717317
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717317-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-717317-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (68.898726ms)
-- stdout --
	* [multinode-717317-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-717317-m02' is duplicated with machine name 'multinode-717317-m02' in profile 'multinode-717317'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717317-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717317-m03 --driver=kvm2  --container-runtime=containerd: (47.172030009s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-717317
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-717317: exit status 80 (217.08436ms)
-- stdout --
	* Adding node m03 to cluster multinode-717317 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-717317-m03 already exists in multinode-717317-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-717317-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.31s)

TestPreload (201.51s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925239 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1210 00:44:06.676808  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:44:10.620397  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925239 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.109458647s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925239 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-925239 image pull gcr.io/k8s-minikube/busybox: (4.522264943s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-925239
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-925239: (6.550209848s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-925239 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-925239 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m6.058092979s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-925239 image list
helpers_test.go:175: Cleaning up "test-preload-925239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-925239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-925239: (1.054703203s)
--- PASS: TestPreload (201.51s)

TestScheduledStopUnix (114.42s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-506427 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-506427 --memory=2048 --driver=kvm2  --container-runtime=containerd: (42.708144394s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-506427 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-506427 -n scheduled-stop-506427
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-506427 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1210 00:46:48.431507  316833 retry.go:31] will retry after 128.811µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.432683  316833 retry.go:31] will retry after 206.777µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.433858  316833 retry.go:31] will retry after 192.55µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.434947  316833 retry.go:31] will retry after 176.545µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.436096  316833 retry.go:31] will retry after 747.274µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.437224  316833 retry.go:31] will retry after 935.46µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.438350  316833 retry.go:31] will retry after 931.065µs: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.439472  316833 retry.go:31] will retry after 1.857962ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.441675  316833 retry.go:31] will retry after 3.28732ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.445853  316833 retry.go:31] will retry after 3.711004ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.450101  316833 retry.go:31] will retry after 5.722133ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.456319  316833 retry.go:31] will retry after 8.665957ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.465548  316833 retry.go:31] will retry after 12.398525ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.478789  316833 retry.go:31] will retry after 10.774704ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.490030  316833 retry.go:31] will retry after 27.712664ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
I1210 00:46:48.518377  316833 retry.go:31] will retry after 44.933519ms: open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/scheduled-stop-506427/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-506427 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-506427 -n scheduled-stop-506427
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-506427
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-506427 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-506427
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-506427: exit status 7 (76.487287ms)
-- stdout --
	scheduled-stop-506427
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-506427 -n scheduled-stop-506427
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-506427 -n scheduled-stop-506427: exit status 7 (66.747448ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-506427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-506427
--- PASS: TestScheduledStopUnix (114.42s)

TestRunningBinaryUpgrade (192.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3116751062 start -p running-upgrade-995273 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E1210 00:48:53.690486  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3116751062 start -p running-upgrade-995273 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m7.035057841s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-995273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-995273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m0.878907493s)
helpers_test.go:175: Cleaning up "running-upgrade-995273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-995273
--- PASS: TestRunningBinaryUpgrade (192.34s)

TestKubernetesUpgrade (179.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m2.642759348s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-872032
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-872032: (1.676867626s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-872032 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-872032 status --format={{.Host}}: exit status 7 (98.802024ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1210 00:49:06.676397  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:49:10.619742  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m12.828402822s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-872032 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (100.021316ms)
-- stdout --
	* [kubernetes-upgrade-872032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-872032
	    minikube start -p kubernetes-upgrade-872032 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8720322 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-872032 --kubernetes-version=v1.31.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-872032 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.55725146s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-872032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-872032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-872032: (1.232473502s)
--- PASS: TestKubernetesUpgrade (179.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (85.896983ms)
-- stdout --
	* [NoKubernetes-749885] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (99.54s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-749885 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-749885 --driver=kvm2  --container-runtime=containerd: (1m39.255233965s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-749885 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.54s)

TestStoppedBinaryUpgrade/Setup (3.2s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.20s)

TestStoppedBinaryUpgrade/Upgrade (174.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1561798471 start -p stopped-upgrade-636351 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1561798471 start -p stopped-upgrade-636351 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m30.481480193s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1561798471 -p stopped-upgrade-636351 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1561798471 -p stopped-upgrade-636351 stop: (1.426164569s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-636351 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-636351 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m22.108912655s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (174.02s)

TestNoKubernetes/serial/StartWithStopK8s (67.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m5.933538731s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-749885 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-749885 status -o json: exit status 2 (239.716903ms)
-- stdout --
	{"Name":"NoKubernetes-749885","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-749885
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-749885: (1.099823277s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.27s)

TestNoKubernetes/serial/Start (29.42s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-749885 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.415693273s)
--- PASS: TestNoKubernetes/serial/Start (29.42s)

TestNetworkPlugins/group/false (3.94s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-714752 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-714752 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (119.107679ms)
-- stdout --
	* [false-714752] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1210 00:51:15.373951  354543 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:51:15.374079  354543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:51:15.374090  354543 out.go:358] Setting ErrFile to fd 2...
	I1210 00:51:15.374096  354543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:51:15.374315  354543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-309592/.minikube/bin
	I1210 00:51:15.374904  354543 out.go:352] Setting JSON to false
	I1210 00:51:15.375890  354543 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":30797,"bootTime":1733761078,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:51:15.375993  354543 start.go:139] virtualization: kvm guest
	I1210 00:51:15.378449  354543 out.go:177] * [false-714752] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:51:15.379861  354543 notify.go:220] Checking for updates...
	I1210 00:51:15.379894  354543 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:51:15.381392  354543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:51:15.382956  354543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-309592/kubeconfig
	I1210 00:51:15.384561  354543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-309592/.minikube
	I1210 00:51:15.386125  354543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:51:15.387577  354543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:51:15.389387  354543 config.go:182] Loaded profile config "NoKubernetes-749885": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1210 00:51:15.389517  354543 config.go:182] Loaded profile config "force-systemd-env-203083": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1210 00:51:15.389598  354543 config.go:182] Loaded profile config "stopped-upgrade-636351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1210 00:51:15.389716  354543 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:51:15.432069  354543 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:51:15.433553  354543 start.go:297] selected driver: kvm2
	I1210 00:51:15.433568  354543 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:51:15.433580  354543 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:51:15.435789  354543 out.go:201] 
	W1210 00:51:15.437161  354543 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1210 00:51:15.438520  354543 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-714752 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-714752

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-714752

>>> host: /etc/nsswitch.conf:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

>>> host: /etc/hosts:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

>>> host: /etc/resolv.conf:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-714752

>>> host: crictl pods:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

>>> host: crictl containers:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

>>> k8s: describe netcat deployment:
error: context "false-714752" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-714752" does not exist

>>> k8s: netcat logs:
error: context "false-714752" does not exist

>>> k8s: describe coredns deployment:
error: context "false-714752" does not exist

>>> k8s: describe coredns pods:
error: context "false-714752" does not exist
>>> k8s: coredns logs:
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-714752" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-714752

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-714752"

                                                
                                                
----------------------- debugLogs end: false-714752 [took: 3.67727311s] --------------------------------
helpers_test.go:175: Cleaning up "false-714752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-714752
--- PASS: TestNetworkPlugins/group/false (3.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-749885 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-749885 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.529816ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

TestNoKubernetes/serial/Stop (2.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-749885
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-749885: (2.332012152s)
--- PASS: TestNoKubernetes/serial/Stop (2.33s)

TestNoKubernetes/serial/StartNoArgs (59.82s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-749885 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-749885 --driver=kvm2  --container-runtime=containerd: (59.819922469s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.82s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-749885 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-749885 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.287542ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-636351
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestPause/serial/Start (146.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-334441 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-334441 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m26.27157148s)
--- PASS: TestPause/serial/Start (146.27s)

TestNetworkPlugins/group/auto/Start (90.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m30.701820803s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.70s)

TestNetworkPlugins/group/kindnet/Start (95.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E1210 00:54:06.676164  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:54:10.620433  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m35.768907982s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (95.77s)

TestPause/serial/SecondStartNoReconfiguration (39.91s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-334441 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-334441 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (39.884866089s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.91s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-714752 "pgrep -a kubelet"
I1210 00:54:55.967424  316833 config.go:182] Loaded profile config "auto-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gqdt6" [45996e05-3d8c-495f-abb5-86bbce24755a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gqdt6" [45996e05-3d8c-495f-abb5-86bbce24755a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004214992s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (87.42s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m27.417067479s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.42s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2hvxt" [4fa74406-df05-41c3-8eea-55e2340a0492] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005984412s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-334441 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-714752 "pgrep -a kubelet"
I1210 00:55:32.142487  316833 config.go:182] Loaded profile config "kindnet-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8jwsb" [268f8a8f-e198-426e-ba3d-690c32b680f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8jwsb" [268f8a8f-e198-426e-ba3d-690c32b680f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005319167s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-334441 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-334441 --output=json --layout=cluster: exit status 2 (317.574413ms)

-- stdout --
	{"Name":"pause-334441","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-334441","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-334441 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.96s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-334441 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-334441 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

TestNetworkPlugins/group/custom-flannel/Start (91.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m31.086405746s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.09s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (106.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m46.59828185s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (106.60s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bbnmm" [850e8e04-13df-438e-874c-b062a9669463] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005428635s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (83.22s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m23.219366214s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.22s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-714752 "pgrep -a kubelet"
I1210 00:56:55.644979  316833 config.go:182] Loaded profile config "calico-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pb24j" [049e58c4-2a6c-4842-9fc7-cf58de3ee98b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pb24j" [049e58c4-2a6c-4842-9fc7-cf58de3ee98b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007012822s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-714752 "pgrep -a kubelet"
I1210 00:57:07.230835  316833 config.go:182] Loaded profile config "custom-flannel-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-798xc" [039e3f1b-b25e-42ac-93e8-d72b641d95ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-798xc" [039e3f1b-b25e-42ac-93e8-d72b641d95ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003563526s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (88.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-714752 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m28.813317515s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (195.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-507893 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-507893 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m15.378292045s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (195.38s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-714752 "pgrep -a kubelet"
I1210 00:57:46.991260  316833 config.go:182] Loaded profile config "enable-default-cni-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wmn5r" [c471bc2f-b4f4-4d21-9d38-33e34c8d43ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wmn5r" [c471bc2f-b4f4-4d21-9d38-33e34c8d43ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004937411s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (108.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-799908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-799908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m48.105849969s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.22s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zv7nt" [cc2ad590-c64d-49aa-abb1-5234890874da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.214730354s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.22s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-714752 "pgrep -a kubelet"
I1210 00:58:23.371747  316833 config.go:182] Loaded profile config "flannel-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (11.05s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-714752 replace --force -f testdata/netcat-deployment.yaml: (1.094943081s)
I1210 00:58:24.470594  316833 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1210 00:58:25.280235  316833 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r68bx" [33ecf8fa-05b3-4e6d-8e6a-a3293ec490fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r68bx" [33ecf8fa-05b3-4e6d-8e6a-a3293ec490fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004566364s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.05s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-714752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (90.22s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-803003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
I1210 00:58:52.052797  316833 config.go:182] Loaded profile config "bridge-714752": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-803003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m30.217664922s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.22s)

TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-714752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cccqh" [539dede9-da51-4bc2-a268-feee06c9d992] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cccqh" [539dede9-da51-4bc2-a268-feee06c9d992] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005025212s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-714752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-714752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-140272 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 00:59:56.244067  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.250578  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.262275  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.283763  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.325213  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.406684  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.568695  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:56.890754  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:57.532053  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:59:58.813777  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:01.375620  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-140272 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m31.677287387s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.68s)

TestStartStop/group/no-preload/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-799908 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f99cf5f9-1545-4ba2-9b21-c074bf170f93] Pending
helpers_test.go:344: "busybox" [f99cf5f9-1545-4ba2-9b21-c074bf170f93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1210 01:00:06.497101  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f99cf5f9-1545-4ba2-9b21-c074bf170f93] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005334059s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-799908 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-799908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-799908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.038166351s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-799908 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (91.74s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-799908 --alsologtostderr -v=3
E1210 01:00:16.738622  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-799908 --alsologtostderr -v=3: (1m31.736487188s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.74s)

TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-803003 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5866eb5a-a8a2-4f44-84a1-7f921b787668] Pending
helpers_test.go:344: "busybox" [5866eb5a-a8a2-4f44-84a1-7f921b787668] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1210 01:00:25.884367  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:25.890841  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:25.902395  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:25.923863  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:25.965332  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:26.046863  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:26.208464  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:26.529889  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:27.172275  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5866eb5a-a8a2-4f44-84a1-7f921b787668] Running
E1210 01:00:28.454647  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:31.016787  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004068554s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-803003 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-803003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-803003 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (92.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-803003 --alsologtostderr -v=3
E1210 01:00:36.139098  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:37.220850  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:46.381349  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-803003 --alsologtostderr -v=3: (1m32.456986325s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.46s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-507893 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d43c52f-3800-4955-b2aa-fd5721f2a21e] Pending
helpers_test.go:344: "busybox" [4d43c52f-3800-4955-b2aa-fd5721f2a21e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d43c52f-3800-4955-b2aa-fd5721f2a21e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003457077s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-507893 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.44s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-140272 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4cd90222-1ac9-42f1-9f75-0ba4db122201] Pending
helpers_test.go:344: "busybox" [4cd90222-1ac9-42f1-9f75-0ba4db122201] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4cd90222-1ac9-42f1-9f75-0ba4db122201] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003570035s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-140272 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-140272 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-140272 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-507893 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-507893 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-140272 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-140272 --alsologtostderr -v=3: (1m31.788296941s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-507893 --alsologtostderr -v=3
E1210 01:01:06.863238  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:18.182850  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-507893 --alsologtostderr -v=3: (1m32.494656659s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799908 -n no-preload-799908
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799908 -n no-preload-799908: exit status 7 (76.600247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-799908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (313.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-799908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 01:01:47.825317  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.405372  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.411848  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.423298  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.444720  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.486235  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.568006  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:49.729934  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:50.051209  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:50.692600  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:51.974860  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:54.536370  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:01:59.658196  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-799908 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (5m13.188256201s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799908 -n no-preload-799908
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.51s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803003 -n embed-certs-803003
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803003 -n embed-certs-803003: exit status 7 (79.042249ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-803003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (298.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-803003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 01:02:07.469217  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.475681  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.487160  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.508682  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.551011  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.632543  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:07.794246  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:08.115980  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:08.757815  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:09.900431  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:10.039709  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:12.602092  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:17.724317  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:27.966560  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:30.382008  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-803003 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m57.966298358s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803003 -n embed-certs-803003
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272: exit status 7 (76.527432ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-140272 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-140272 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-140272 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m53.904286427s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507893 -n old-k8s-version-507893
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507893 -n old-k8s-version-507893: exit status 7 (77.657435ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-507893 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (186.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-507893 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E1210 01:02:40.104695  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.231142  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.237558  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.249060  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.270557  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.311957  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.393421  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.555174  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:47.877375  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:48.447943  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:48.519704  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:49.801677  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:52.363572  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:02:57.485527  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:07.726888  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:09.747658  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:11.343778  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.772350  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.778836  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.790394  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.812148  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.853651  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:16.935194  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:17.096750  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:17.418647  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:18.061012  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:19.342984  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:21.905179  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:27.027275  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:28.209168  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:29.410316  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:37.269674  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.346206  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.352671  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.364063  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.385576  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.427071  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.509230  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.670801  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:52.992492  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:53.633848  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:54.915163  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:57.477327  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:03:57.751314  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:02.599789  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:06.676132  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/functional-283319/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:09.171388  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:10.620001  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:12.842171  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:33.265769  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:33.323756  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:38.712599  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:51.332290  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:04:56.244309  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:14.285863  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:23.947033  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/auto-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:25.884309  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:31.093185  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/enable-default-cni-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:33.692334  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/addons-722117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-507893 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m6.482992432s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-507893 -n old-k8s-version-507893
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (186.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c8cjk" [05464389-2be7-41b2-954d-beb1a45d4143] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004763867s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c8cjk" [05464389-2be7-41b2-954d-beb1a45d4143] Running
E1210 01:05:53.589253  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/kindnet-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004559026s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-507893 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-507893 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-507893 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507893 -n old-k8s-version-507893
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507893 -n old-k8s-version-507893: exit status 2 (262.76798ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-507893 -n old-k8s-version-507893
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-507893 -n old-k8s-version-507893: exit status 2 (252.162305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-507893 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-507893 -n old-k8s-version-507893
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-507893 -n old-k8s-version-507893
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.67s)

TestStartStop/group/newest-cni/serial/FirstStart (52.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-381361 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
E1210 01:06:00.634234  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:06:36.208113  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/bridge-714752/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:06:49.405357  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-381361 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (52.218942655s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-381361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-381361 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-381361 --alsologtostderr -v=3: (2.328038878s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-381361 -n newest-cni-381361
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-381361 -n newest-cni-381361: exit status 7 (78.501043ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-381361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (77.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-381361 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-381361 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m17.39378811s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-381361 -n newest-cni-381361
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (77.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zwljw" [8011ef00-11d7-48db-82d7-1879f4fbc105] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zwljw" [8011ef00-11d7-48db-82d7-1879f4fbc105] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.006115017s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rschb" [aaf2ead8-4b41-40ef-bdee-ac20563523fb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003838018s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zwljw" [8011ef00-11d7-48db-82d7-1879f4fbc105] Running
E1210 01:07:07.469492  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004446195s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-799908 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rschb" [aaf2ead8-4b41-40ef-bdee-ac20563523fb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005009102s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-803003 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-799908 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-799908 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799908 -n no-preload-799908
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799908 -n no-preload-799908: exit status 2 (245.304696ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799908 -n no-preload-799908
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799908 -n no-preload-799908: exit status 2 (241.579899ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-799908 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799908 -n no-preload-799908
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799908 -n no-preload-799908
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.72s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-803003 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-803003 --alsologtostderr -v=1
E1210 01:07:17.107485  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/calico-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803003 -n embed-certs-803003
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803003 -n embed-certs-803003: exit status 2 (298.875352ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803003 -n embed-certs-803003
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803003 -n embed-certs-803003: exit status 2 (278.610436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-803003 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803003 -n embed-certs-803003
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803003 -n embed-certs-803003
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9gp6w" [bf4e1f45-7a75-46c3-bbb8-87ee1291ea6e] Running
E1210 01:07:35.174580  316833 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-309592/.minikube/profiles/custom-flannel-714752/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0041302s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9gp6w" [bf4e1f45-7a75-46c3-bbb8-87ee1291ea6e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00488809s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-140272 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-140272 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-140272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272: exit status 2 (245.968689ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272: exit status 2 (248.375167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-140272 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-140272 -n default-k8s-diff-port-140272
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-381361 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-381361 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-381361 -n newest-cni-381361
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-381361 -n newest-cni-381361: exit status 2 (237.02526ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-381361 -n newest-cni-381361
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-381361 -n newest-cni-381361: exit status 2 (239.534878ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-381361 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-381361 -n newest-cni-381361
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-381361 -n newest-cni-381361
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)
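All three Pause subtests above follow the same pattern: `pause`, then `status --format={{.APIServer}}` (exit status 2, stdout `Paused`), then `status --format={{.Kubelet}}` (exit status 2, stdout `Stopped`), then `unpause` and the two status checks again. A minimal sketch of the acceptance rule the harness applies ("status error: exit status 2 (may be ok)") — the `pauseStatusOK` helper is illustrative, not minikube's actual test code:

```go
package main

import (
	"fmt"
	"strings"
)

// pauseStatusOK reports whether a `minikube status --format={{...}}` result is
// acceptable while the cluster is paused. Per the log, exit status 2 is
// tolerated ("may be ok") as long as stdout shows the expected paused-state
// value: "Paused" for the API server, "Stopped" for the kubelet.
func pauseStatusOK(component string, exitCode int, stdout string) bool {
	out := strings.TrimSpace(stdout)
	switch component {
	case "APIServer":
		return exitCode == 2 && out == "Paused"
	case "Kubelet":
		return exitCode == 2 && out == "Stopped"
	}
	return false
}

func main() {
	// Values taken from the Pause logs above.
	fmt.Println(pauseStatusOK("APIServer", 2, "\tPaused\n"))
	fmt.Println(pauseStatusOK("Kubelet", 2, "\tStopped\n"))
}
```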

                                                
                                    

Test skip (38/328)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 3.21
272 TestNetworkPlugins/group/cilium 3.44
279 TestStartStop/group/disable-driver-mounts 0.17
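Most of the 38 skips listed above are platform or driver guards: the test calls `t.Skip` when the host OS, driver, or container runtime does not match. A hedged sketch of that gating logic, using skip messages quoted from the log (the `skipReason` helper is hypothetical, not minikube's real code):

```go
package main

import "fmt"

// skipReason returns the skip message a guarded test would emit for the given
// environment, or "" if the test would run. Test names and messages are taken
// verbatim from the log above; the helper itself is illustrative only.
func skipReason(test, goos, containerRuntime string) string {
	switch test {
	case "TestHyperKitDriverInstallOrUpdate", "TestHyperkitDriverSkipUpgrade":
		if goos != "darwin" {
			return "Skip if not darwin."
		}
	case "TestScheduledStopWindows":
		if goos != "windows" {
			return "test only runs on windows"
		}
	case "TestDockerFlags":
		if containerRuntime != "docker" {
			return "skipping: only runs with docker container runtime, currently testing " + containerRuntime
		}
	}
	return "" // no guard fired; the test would run
}

func main() {
	// This run was linux/amd64 with containerd, so all three guards fire.
	fmt.Println(skipReason("TestHyperKitDriverInstallOrUpdate", "linux", "containerd"))
	fmt.Println(skipReason("TestScheduledStopWindows", "linux", "containerd"))
	fmt.Println(skipReason("TestDockerFlags", "linux", "containerd"))
}
```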
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-714752 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-714752

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-714752

>>> host: crictl pods:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: crictl containers:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> k8s: describe netcat deployment:
error: context "kubenet-714752" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-714752" does not exist

>>> k8s: netcat logs:
error: context "kubenet-714752" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-714752" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-714752" does not exist

>>> k8s: coredns logs:
error: context "kubenet-714752" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-714752" does not exist

>>> k8s: api server logs:
error: context "kubenet-714752" does not exist

>>> host: /etc/cni:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: ip a s:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: ip r s:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: iptables-save:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: iptables table nat:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-714752" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-714752" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-714752" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: kubelet daemon config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> k8s: kubelet logs:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-714752

>>> host: docker daemon status:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: docker daemon config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: docker system info:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: cri-docker daemon status:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: cri-docker daemon config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: cri-dockerd version:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: containerd daemon status:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: containerd daemon config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: containerd config dump:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: crio daemon status:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: crio daemon config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: /etc/crio:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

>>> host: crio config:
* Profile "kubenet-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-714752"

----------------------- debugLogs end: kubenet-714752 [took: 3.049688783s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-714752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-714752
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

TestNetworkPlugins/group/cilium (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-714752 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-714752" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-714752

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: cri-dockerd version:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: containerd daemon status:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: containerd daemon config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: containerd config dump:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: crio daemon status:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: crio daemon config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: /etc/crio:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

>>> host: crio config:
* Profile "cilium-714752" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714752"

----------------------- debugLogs end: cilium-714752 [took: 3.289877177s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-714752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-714752
--- SKIP: TestNetworkPlugins/group/cilium (3.44s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-565724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-565724
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
