Test Report: KVM_Linux 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302

Failed tests: 1/340

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.57        |
TestAddons/parallel/Registry (72.57s)
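The failure in the log below is the in-cluster reachability probe: a busybox pod running `wget --spider -S http://registry.kube-system.svc.cluster.local` exited non-zero after the 1m0s timeout. A minimal sketch for re-running that probe by hand, assuming the `addons-022099` profile from this run is still up; the `RUN_PROBE` guard and the `registry-probe` pod name are illustrative additions, and by default the script only prints the command instead of executing it:

```shell
#!/bin/sh
# Sketch: re-run the registry reachability probe that timed out in this test.
# Assumes the minikube profile "addons-022099" from this run is still running.
# By default the command is only printed; set RUN_PROBE=1 to execute it
# against a live cluster.
PROFILE="addons-022099"
CMD="kubectl --context $PROFILE run registry-probe --rm --restart=Never \
--image=gcr.io/k8s-minikube/busybox -it -- \
sh -c 'wget --spider -S http://registry.kube-system.svc.cluster.local'"
if [ "${RUN_PROBE:-0}" = "1" ]; then
  eval "$CMD"
else
  printf '%s\n' "$CMD"
fi
```

If the probe keeps timing out, inspecting the Service's endpoints (`kubectl --context addons-022099 -n kube-system get endpoints registry`) usually shows whether the registry pod is actually backing the Service.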

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.618676ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8nlrk" [ab4d0c54-9272-4637-a6f3-5c42d97b42cf] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004042435s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bw9ps" [fa5aee16-1db8-452c-a07d-1062044723ed] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003469775s
addons_test.go:338: (dbg) Run:  kubectl --context addons-022099 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-022099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-022099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083784156s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-022099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 ip
2024/09/20 21:01:25 [DEBUG] GET http://192.168.39.113:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-022099 -n addons-022099
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-671571                                                                     | download-only-671571 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-785564                                                                     | download-only-785564 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-671571                                                                     | download-only-671571 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-825813 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | binary-mirror-825813                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43433                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-825813                                                                     | binary-mirror-825813 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-022099                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-022099                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-022099 --wait=true                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 20:52 UTC | 20 Sep 24 20:52 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | -p addons-022099                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | addons-022099                                                                               |                      |         |         |                     |                     |
	| addons  | addons-022099 addons                                                                        | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | -p addons-022099                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | addons-022099                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-022099 ssh cat                                                                       | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | /opt/local-path-provisioner/pvc-cfafe1ad-5fed-43fd-aea8-6c7af2b579b8_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-022099 ssh curl -s                                                                   | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-022099 ip                                                                            | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-022099 addons                                                                        | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-022099 ip                                                                            | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	| addons  | addons-022099 addons disable                                                                | addons-022099        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:52
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:52.651356   17444 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:52.651488   17444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:52.651499   17444 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:52.651505   17444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:52.651675   17444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 20:47:52.652269   17444 out.go:352] Setting JSON to false
	I0920 20:47:52.653121   17444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":1822,"bootTime":1726863451,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:52.653215   17444 start.go:139] virtualization: kvm guest
	I0920 20:47:52.655313   17444 out.go:177] * [addons-022099] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:52.656633   17444 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:47:52.656683   17444 notify.go:220] Checking for updates...
	I0920 20:47:52.658931   17444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:52.660135   17444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 20:47:52.661430   17444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 20:47:52.662775   17444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:47:52.663967   17444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:47:52.665198   17444 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:52.696183   17444 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 20:47:52.697584   17444 start.go:297] selected driver: kvm2
	I0920 20:47:52.697595   17444 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:52.697608   17444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:47:52.698271   17444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:52.698357   17444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:52.712423   17444 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:52.712472   17444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:52.712701   17444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:47:52.712731   17444 cni.go:84] Creating CNI manager for ""
	I0920 20:47:52.712785   17444 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:47:52.712813   17444 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:52.712901   17444 start.go:340] cluster config:
	{Name:addons-022099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:52.713025   17444 iso.go:125] acquiring lock: {Name:mk0664e876c81c8da8805d4583236b6d02c9f72b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:52.714807   17444 out.go:177] * Starting "addons-022099" primary control-plane node in "addons-022099" cluster
	I0920 20:47:52.715951   17444 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:47:52.715991   17444 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 20:47:52.716002   17444 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:52.716070   17444 preload.go:172] Found /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 20:47:52.716082   17444 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 20:47:52.716393   17444 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/config.json ...
	I0920 20:47:52.716419   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/config.json: {Name:mkceb58e9507f2f308ae81a6e34feb7ed5fcbccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:52.716588   17444 start.go:360] acquireMachinesLock for addons-022099: {Name:mkca85631c68da13fd2f5aa83d9d70f93de2c849 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 20:47:52.716655   17444 start.go:364] duration metric: took 50.47µs to acquireMachinesLock for "addons-022099"
	I0920 20:47:52.716677   17444 start.go:93] Provisioning new machine with config: &{Name:addons-022099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:47:52.716745   17444 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 20:47:52.718344   17444 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 20:47:52.718469   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:47:52.718507   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:47:52.732040   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0920 20:47:52.732515   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:47:52.732995   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:47:52.733013   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:47:52.733314   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:47:52.733460   17444 main.go:141] libmachine: (addons-022099) Calling .GetMachineName
	I0920 20:47:52.733584   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:47:52.733739   17444 start.go:159] libmachine.API.Create for "addons-022099" (driver="kvm2")
	I0920 20:47:52.733764   17444 client.go:168] LocalClient.Create starting
	I0920 20:47:52.733814   17444 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem
	I0920 20:47:52.891474   17444 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/cert.pem
	I0920 20:47:53.071207   17444 main.go:141] libmachine: Running pre-create checks...
	I0920 20:47:53.071230   17444 main.go:141] libmachine: (addons-022099) Calling .PreCreateCheck
	I0920 20:47:53.071698   17444 main.go:141] libmachine: (addons-022099) Calling .GetConfigRaw
	I0920 20:47:53.072139   17444 main.go:141] libmachine: Creating machine...
	I0920 20:47:53.072152   17444 main.go:141] libmachine: (addons-022099) Calling .Create
	I0920 20:47:53.072314   17444 main.go:141] libmachine: (addons-022099) Creating KVM machine...
	I0920 20:47:53.073512   17444 main.go:141] libmachine: (addons-022099) DBG | found existing default KVM network
	I0920 20:47:53.074330   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:53.074170   17465 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0920 20:47:53.074398   17444 main.go:141] libmachine: (addons-022099) DBG | created network xml: 
	I0920 20:47:53.074420   17444 main.go:141] libmachine: (addons-022099) DBG | <network>
	I0920 20:47:53.074431   17444 main.go:141] libmachine: (addons-022099) DBG |   <name>mk-addons-022099</name>
	I0920 20:47:53.074443   17444 main.go:141] libmachine: (addons-022099) DBG |   <dns enable='no'/>
	I0920 20:47:53.074455   17444 main.go:141] libmachine: (addons-022099) DBG |   
	I0920 20:47:53.074465   17444 main.go:141] libmachine: (addons-022099) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 20:47:53.074478   17444 main.go:141] libmachine: (addons-022099) DBG |     <dhcp>
	I0920 20:47:53.074489   17444 main.go:141] libmachine: (addons-022099) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 20:47:53.074499   17444 main.go:141] libmachine: (addons-022099) DBG |     </dhcp>
	I0920 20:47:53.074509   17444 main.go:141] libmachine: (addons-022099) DBG |   </ip>
	I0920 20:47:53.074551   17444 main.go:141] libmachine: (addons-022099) DBG |   
	I0920 20:47:53.074580   17444 main.go:141] libmachine: (addons-022099) DBG | </network>
	I0920 20:47:53.074591   17444 main.go:141] libmachine: (addons-022099) DBG | 
	I0920 20:47:53.080139   17444 main.go:141] libmachine: (addons-022099) DBG | trying to create private KVM network mk-addons-022099 192.168.39.0/24...
	I0920 20:47:53.144287   17444 main.go:141] libmachine: (addons-022099) DBG | private KVM network mk-addons-022099 192.168.39.0/24 created
	I0920 20:47:53.144320   17444 main.go:141] libmachine: (addons-022099) Setting up store path in /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099 ...
	I0920 20:47:53.144348   17444 main.go:141] libmachine: (addons-022099) Building disk image from file:///home/jenkins/minikube-integration/19672-9629/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:53.144366   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:53.144332   17465 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 20:47:53.144590   17444 main.go:141] libmachine: (addons-022099) Downloading /home/jenkins/minikube-integration/19672-9629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9629/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 20:47:53.392653   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:53.392536   17465 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa...
	I0920 20:47:53.478010   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:53.477882   17465 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/addons-022099.rawdisk...
	I0920 20:47:53.478034   17444 main.go:141] libmachine: (addons-022099) DBG | Writing magic tar header
	I0920 20:47:53.478043   17444 main.go:141] libmachine: (addons-022099) DBG | Writing SSH key tar header
	I0920 20:47:53.478051   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:53.478004   17465 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099 ...
	I0920 20:47:53.478149   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099 (perms=drwx------)
	I0920 20:47:53.478173   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099
	I0920 20:47:53.478185   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins/minikube-integration/19672-9629/.minikube/machines (perms=drwxr-xr-x)
	I0920 20:47:53.478218   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9629/.minikube/machines
	I0920 20:47:53.478243   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins/minikube-integration/19672-9629/.minikube (perms=drwxr-xr-x)
	I0920 20:47:53.478257   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 20:47:53.478276   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9629
	I0920 20:47:53.478288   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 20:47:53.478300   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home/jenkins
	I0920 20:47:53.478311   17444 main.go:141] libmachine: (addons-022099) DBG | Checking permissions on dir: /home
	I0920 20:47:53.478322   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins/minikube-integration/19672-9629 (perms=drwxrwxr-x)
	I0920 20:47:53.478332   17444 main.go:141] libmachine: (addons-022099) DBG | Skipping /home - not owner
	I0920 20:47:53.478348   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 20:47:53.478359   17444 main.go:141] libmachine: (addons-022099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 20:47:53.478371   17444 main.go:141] libmachine: (addons-022099) Creating domain...
	I0920 20:47:53.479231   17444 main.go:141] libmachine: (addons-022099) define libvirt domain using xml: 
	I0920 20:47:53.479253   17444 main.go:141] libmachine: (addons-022099) <domain type='kvm'>
	I0920 20:47:53.479280   17444 main.go:141] libmachine: (addons-022099)   <name>addons-022099</name>
	I0920 20:47:53.479301   17444 main.go:141] libmachine: (addons-022099)   <memory unit='MiB'>4000</memory>
	I0920 20:47:53.479310   17444 main.go:141] libmachine: (addons-022099)   <vcpu>2</vcpu>
	I0920 20:47:53.479319   17444 main.go:141] libmachine: (addons-022099)   <features>
	I0920 20:47:53.479325   17444 main.go:141] libmachine: (addons-022099)     <acpi/>
	I0920 20:47:53.479333   17444 main.go:141] libmachine: (addons-022099)     <apic/>
	I0920 20:47:53.479340   17444 main.go:141] libmachine: (addons-022099)     <pae/>
	I0920 20:47:53.479350   17444 main.go:141] libmachine: (addons-022099)     
	I0920 20:47:53.479355   17444 main.go:141] libmachine: (addons-022099)   </features>
	I0920 20:47:53.479360   17444 main.go:141] libmachine: (addons-022099)   <cpu mode='host-passthrough'>
	I0920 20:47:53.479365   17444 main.go:141] libmachine: (addons-022099)   
	I0920 20:47:53.479371   17444 main.go:141] libmachine: (addons-022099)   </cpu>
	I0920 20:47:53.479376   17444 main.go:141] libmachine: (addons-022099)   <os>
	I0920 20:47:53.479381   17444 main.go:141] libmachine: (addons-022099)     <type>hvm</type>
	I0920 20:47:53.479388   17444 main.go:141] libmachine: (addons-022099)     <boot dev='cdrom'/>
	I0920 20:47:53.479393   17444 main.go:141] libmachine: (addons-022099)     <boot dev='hd'/>
	I0920 20:47:53.479415   17444 main.go:141] libmachine: (addons-022099)     <bootmenu enable='no'/>
	I0920 20:47:53.479425   17444 main.go:141] libmachine: (addons-022099)   </os>
	I0920 20:47:53.479429   17444 main.go:141] libmachine: (addons-022099)   <devices>
	I0920 20:47:53.479435   17444 main.go:141] libmachine: (addons-022099)     <disk type='file' device='cdrom'>
	I0920 20:47:53.479444   17444 main.go:141] libmachine: (addons-022099)       <source file='/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/boot2docker.iso'/>
	I0920 20:47:53.479450   17444 main.go:141] libmachine: (addons-022099)       <target dev='hdc' bus='scsi'/>
	I0920 20:47:53.479455   17444 main.go:141] libmachine: (addons-022099)       <readonly/>
	I0920 20:47:53.479464   17444 main.go:141] libmachine: (addons-022099)     </disk>
	I0920 20:47:53.479471   17444 main.go:141] libmachine: (addons-022099)     <disk type='file' device='disk'>
	I0920 20:47:53.479477   17444 main.go:141] libmachine: (addons-022099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 20:47:53.479486   17444 main.go:141] libmachine: (addons-022099)       <source file='/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/addons-022099.rawdisk'/>
	I0920 20:47:53.479491   17444 main.go:141] libmachine: (addons-022099)       <target dev='hda' bus='virtio'/>
	I0920 20:47:53.479498   17444 main.go:141] libmachine: (addons-022099)     </disk>
	I0920 20:47:53.479502   17444 main.go:141] libmachine: (addons-022099)     <interface type='network'>
	I0920 20:47:53.479510   17444 main.go:141] libmachine: (addons-022099)       <source network='mk-addons-022099'/>
	I0920 20:47:53.479514   17444 main.go:141] libmachine: (addons-022099)       <model type='virtio'/>
	I0920 20:47:53.479522   17444 main.go:141] libmachine: (addons-022099)     </interface>
	I0920 20:47:53.479526   17444 main.go:141] libmachine: (addons-022099)     <interface type='network'>
	I0920 20:47:53.479546   17444 main.go:141] libmachine: (addons-022099)       <source network='default'/>
	I0920 20:47:53.479563   17444 main.go:141] libmachine: (addons-022099)       <model type='virtio'/>
	I0920 20:47:53.479575   17444 main.go:141] libmachine: (addons-022099)     </interface>
	I0920 20:47:53.479584   17444 main.go:141] libmachine: (addons-022099)     <serial type='pty'>
	I0920 20:47:53.479593   17444 main.go:141] libmachine: (addons-022099)       <target port='0'/>
	I0920 20:47:53.479602   17444 main.go:141] libmachine: (addons-022099)     </serial>
	I0920 20:47:53.479610   17444 main.go:141] libmachine: (addons-022099)     <console type='pty'>
	I0920 20:47:53.479624   17444 main.go:141] libmachine: (addons-022099)       <target type='serial' port='0'/>
	I0920 20:47:53.479638   17444 main.go:141] libmachine: (addons-022099)     </console>
	I0920 20:47:53.479655   17444 main.go:141] libmachine: (addons-022099)     <rng model='virtio'>
	I0920 20:47:53.479668   17444 main.go:141] libmachine: (addons-022099)       <backend model='random'>/dev/random</backend>
	I0920 20:47:53.479677   17444 main.go:141] libmachine: (addons-022099)     </rng>
	I0920 20:47:53.479684   17444 main.go:141] libmachine: (addons-022099)     
	I0920 20:47:53.479690   17444 main.go:141] libmachine: (addons-022099)     
	I0920 20:47:53.479698   17444 main.go:141] libmachine: (addons-022099)   </devices>
	I0920 20:47:53.479704   17444 main.go:141] libmachine: (addons-022099) </domain>
	I0920 20:47:53.479711   17444 main.go:141] libmachine: (addons-022099) 
	I0920 20:47:53.486171   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:33:26:55 in network default
	I0920 20:47:53.486680   17444 main.go:141] libmachine: (addons-022099) Ensuring networks are active...
	I0920 20:47:53.486698   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:53.487302   17444 main.go:141] libmachine: (addons-022099) Ensuring network default is active
	I0920 20:47:53.487542   17444 main.go:141] libmachine: (addons-022099) Ensuring network mk-addons-022099 is active
	I0920 20:47:53.488696   17444 main.go:141] libmachine: (addons-022099) Getting domain xml...
	I0920 20:47:53.489326   17444 main.go:141] libmachine: (addons-022099) Creating domain...
	I0920 20:47:54.877228   17444 main.go:141] libmachine: (addons-022099) Waiting to get IP...
	I0920 20:47:54.878116   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:54.878471   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:54.878525   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:54.878475   17465 retry.go:31] will retry after 263.689311ms: waiting for machine to come up
	I0920 20:47:55.143969   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:55.144412   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:55.144438   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:55.144374   17465 retry.go:31] will retry after 238.407717ms: waiting for machine to come up
	I0920 20:47:55.384865   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:55.385284   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:55.385317   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:55.385248   17465 retry.go:31] will retry after 394.204341ms: waiting for machine to come up
	I0920 20:47:55.780781   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:55.781144   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:55.781164   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:55.781107   17465 retry.go:31] will retry after 385.101794ms: waiting for machine to come up
	I0920 20:47:56.167813   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:56.168159   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:56.168198   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:56.168117   17465 retry.go:31] will retry after 587.644479ms: waiting for machine to come up
	I0920 20:47:56.756913   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:56.757250   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:56.757272   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:56.757197   17465 retry.go:31] will retry after 738.790034ms: waiting for machine to come up
	I0920 20:47:57.497948   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:57.498329   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:57.498349   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:57.498305   17465 retry.go:31] will retry after 817.368627ms: waiting for machine to come up
	I0920 20:47:58.316716   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:58.317099   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:58.317130   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:58.317042   17465 retry.go:31] will retry after 902.905348ms: waiting for machine to come up
	I0920 20:47:59.221010   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:47:59.221498   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:47:59.221518   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:47:59.221449   17465 retry.go:31] will retry after 1.399420119s: waiting for machine to come up
	I0920 20:48:00.623177   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:00.623665   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:48:00.623686   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:48:00.623614   17465 retry.go:31] will retry after 2.156920351s: waiting for machine to come up
	I0920 20:48:02.782044   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:02.782530   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:48:02.782564   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:48:02.782474   17465 retry.go:31] will retry after 2.630537468s: waiting for machine to come up
	I0920 20:48:05.417788   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:05.418232   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:48:05.418258   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:48:05.418180   17465 retry.go:31] will retry after 2.656636611s: waiting for machine to come up
	I0920 20:48:08.076431   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:08.076944   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:48:08.076972   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:48:08.076892   17465 retry.go:31] will retry after 3.861674327s: waiting for machine to come up
	I0920 20:48:11.942727   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:11.943121   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find current IP address of domain addons-022099 in network mk-addons-022099
	I0920 20:48:11.943162   17444 main.go:141] libmachine: (addons-022099) DBG | I0920 20:48:11.943128   17465 retry.go:31] will retry after 4.953073347s: waiting for machine to come up
	I0920 20:48:16.901040   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:16.901554   17444 main.go:141] libmachine: (addons-022099) Found IP for machine: 192.168.39.113
	I0920 20:48:16.901571   17444 main.go:141] libmachine: (addons-022099) Reserving static IP address...
	I0920 20:48:16.901582   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has current primary IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:16.901959   17444 main.go:141] libmachine: (addons-022099) DBG | unable to find host DHCP lease matching {name: "addons-022099", mac: "52:54:00:74:f4:69", ip: "192.168.39.113"} in network mk-addons-022099
	I0920 20:48:16.969453   17444 main.go:141] libmachine: (addons-022099) Reserved static IP address: 192.168.39.113
	I0920 20:48:16.969488   17444 main.go:141] libmachine: (addons-022099) Waiting for SSH to be available...
	I0920 20:48:16.969497   17444 main.go:141] libmachine: (addons-022099) DBG | Getting to WaitForSSH function...
	I0920 20:48:16.971851   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:16.972257   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:16.972284   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:16.972455   17444 main.go:141] libmachine: (addons-022099) DBG | Using SSH client type: external
	I0920 20:48:16.972490   17444 main.go:141] libmachine: (addons-022099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa (-rw-------)
	I0920 20:48:16.972545   17444 main.go:141] libmachine: (addons-022099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 20:48:16.972564   17444 main.go:141] libmachine: (addons-022099) DBG | About to run SSH command:
	I0920 20:48:16.972578   17444 main.go:141] libmachine: (addons-022099) DBG | exit 0
	I0920 20:48:17.104453   17444 main.go:141] libmachine: (addons-022099) DBG | SSH cmd err, output: <nil>: 
	I0920 20:48:17.104812   17444 main.go:141] libmachine: (addons-022099) KVM machine creation complete!
	I0920 20:48:17.105000   17444 main.go:141] libmachine: (addons-022099) Calling .GetConfigRaw
	I0920 20:48:17.105582   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:17.105811   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:17.105959   17444 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 20:48:17.105976   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:17.107147   17444 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 20:48:17.107160   17444 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 20:48:17.107167   17444 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 20:48:17.107175   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.109277   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.109641   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.109663   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.109761   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:17.109926   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.110101   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.110244   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:17.110417   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.110581   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:17.110591   17444 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 20:48:17.207502   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:17.207533   17444 main.go:141] libmachine: Detecting the provisioner...
	I0920 20:48:17.207542   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.210101   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.210469   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.210504   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.210607   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:17.210794   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.210934   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.211156   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:17.211328   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.211476   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:17.211485   17444 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 20:48:17.313174   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 20:48:17.313231   17444 main.go:141] libmachine: found compatible host: buildroot
	I0920 20:48:17.313237   17444 main.go:141] libmachine: Provisioning with buildroot...
	I0920 20:48:17.313244   17444 main.go:141] libmachine: (addons-022099) Calling .GetMachineName
	I0920 20:48:17.313454   17444 buildroot.go:166] provisioning hostname "addons-022099"
	I0920 20:48:17.313476   17444 main.go:141] libmachine: (addons-022099) Calling .GetMachineName
	I0920 20:48:17.313660   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.316209   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.316586   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.316624   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.316777   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:17.316929   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.317093   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.317189   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:17.317312   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.317464   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:17.317477   17444 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-022099 && echo "addons-022099" | sudo tee /etc/hostname
	I0920 20:48:17.430062   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022099
	
	I0920 20:48:17.430089   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.432790   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.433112   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.433141   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.433309   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:17.433487   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.433666   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.433817   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:17.433975   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.434129   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:17.434144   17444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-022099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-022099/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-022099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:48:17.540846   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:17.540877   17444 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9629/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9629/.minikube}
	I0920 20:48:17.540914   17444 buildroot.go:174] setting up certificates
	I0920 20:48:17.540924   17444 provision.go:84] configureAuth start
	I0920 20:48:17.540935   17444 main.go:141] libmachine: (addons-022099) Calling .GetMachineName
	I0920 20:48:17.541228   17444 main.go:141] libmachine: (addons-022099) Calling .GetIP
	I0920 20:48:17.543586   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.543901   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.543928   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.544005   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.546069   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.546419   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.546451   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.546591   17444 provision.go:143] copyHostCerts
	I0920 20:48:17.546651   17444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9629/.minikube/ca.pem (1082 bytes)
	I0920 20:48:17.546753   17444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9629/.minikube/cert.pem (1123 bytes)
	I0920 20:48:17.546849   17444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9629/.minikube/key.pem (1679 bytes)
	I0920 20:48:17.546906   17444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca-key.pem org=jenkins.addons-022099 san=[127.0.0.1 192.168.39.113 addons-022099 localhost minikube]
	I0920 20:48:17.878862   17444 provision.go:177] copyRemoteCerts
	I0920 20:48:17.878930   17444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:48:17.878951   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:17.881538   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.881848   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:17.881867   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:17.882060   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:17.882228   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:17.882340   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:17.882466   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:17.962229   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 20:48:17.986386   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 20:48:18.010601   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 20:48:18.033759   17444 provision.go:87] duration metric: took 492.821893ms to configureAuth
	I0920 20:48:18.033796   17444 buildroot.go:189] setting minikube options for container-runtime
	I0920 20:48:18.033961   17444 config.go:182] Loaded profile config "addons-022099": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:18.033988   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:18.034216   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:18.036810   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.037168   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:18.037196   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.037315   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:18.037484   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.037625   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.037756   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:18.037912   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:18.038058   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:18.038068   17444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 20:48:18.137682   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 20:48:18.137709   17444 buildroot.go:70] root file system type: tmpfs
	I0920 20:48:18.137850   17444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 20:48:18.137872   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:18.140701   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.141020   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:18.141043   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.141215   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:18.141407   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.141555   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.141706   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:18.141888   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:18.142041   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:18.142096   17444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 20:48:18.254053   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 20:48:18.254079   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:18.256768   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.257072   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:18.257088   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:18.257282   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:18.257456   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.257587   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:18.257738   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:18.257889   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:18.258060   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:18.258082   17444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 20:48:19.998614   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 20:48:19.998644   17444 main.go:141] libmachine: Checking connection to Docker...
	I0920 20:48:19.998655   17444 main.go:141] libmachine: (addons-022099) Calling .GetURL
	I0920 20:48:19.999795   17444 main.go:141] libmachine: (addons-022099) DBG | Using libvirt version 6000000
	I0920 20:48:20.001948   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.002245   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.002287   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.002425   17444 main.go:141] libmachine: Docker is up and running!
	I0920 20:48:20.002441   17444 main.go:141] libmachine: Reticulating splines...
	I0920 20:48:20.002451   17444 client.go:171] duration metric: took 27.268673875s to LocalClient.Create
	I0920 20:48:20.002471   17444 start.go:167] duration metric: took 27.26873382s to libmachine.API.Create "addons-022099"
	I0920 20:48:20.002479   17444 start.go:293] postStartSetup for "addons-022099" (driver="kvm2")
	I0920 20:48:20.002490   17444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:20.002505   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:20.002745   17444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:20.002771   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:20.004785   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.005062   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.005087   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.005216   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:20.005387   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:20.005519   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:20.005649   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:20.087463   17444 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:48:20.091922   17444 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 20:48:20.091947   17444 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9629/.minikube/addons for local assets ...
	I0920 20:48:20.092012   17444 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9629/.minikube/files for local assets ...
	I0920 20:48:20.092040   17444 start.go:296] duration metric: took 89.555675ms for postStartSetup
	I0920 20:48:20.092074   17444 main.go:141] libmachine: (addons-022099) Calling .GetConfigRaw
	I0920 20:48:20.092625   17444 main.go:141] libmachine: (addons-022099) Calling .GetIP
	I0920 20:48:20.095053   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.095346   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.095372   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.095558   17444 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/config.json ...
	I0920 20:48:20.095818   17444 start.go:128] duration metric: took 27.37906093s to createHost
	I0920 20:48:20.095842   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:20.098079   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.098452   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.098475   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.098575   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:20.098779   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:20.098927   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:20.099093   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:20.099247   17444 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.099449   17444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0920 20:48:20.099463   17444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 20:48:20.200923   17444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726865300.179459445
	
	I0920 20:48:20.200943   17444 fix.go:216] guest clock: 1726865300.179459445
	I0920 20:48:20.200953   17444 fix.go:229] Guest: 2024-09-20 20:48:20.179459445 +0000 UTC Remote: 2024-09-20 20:48:20.095830728 +0000 UTC m=+27.477177235 (delta=83.628717ms)
	I0920 20:48:20.200976   17444 fix.go:200] guest clock delta is within tolerance: 83.628717ms
	I0920 20:48:20.200983   17444 start.go:83] releasing machines lock for "addons-022099", held for 27.484317092s
	I0920 20:48:20.201003   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:20.201253   17444 main.go:141] libmachine: (addons-022099) Calling .GetIP
	I0920 20:48:20.203692   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.204187   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.204214   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.204316   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:20.204826   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:20.205009   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:20.205075   17444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:48:20.205135   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:20.205197   17444 ssh_runner.go:195] Run: cat /version.json
	I0920 20:48:20.205221   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:20.207667   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.207826   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.207971   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.207997   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.208089   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:20.208221   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:20.208236   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:20.208242   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:20.208366   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:20.208430   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:20.208507   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:20.208561   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:20.208699   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:20.208840   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:20.280878   17444 ssh_runner.go:195] Run: systemctl --version
	I0920 20:48:20.315426   17444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 20:48:20.321021   17444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 20:48:20.321082   17444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:20.337009   17444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 20:48:20.337029   17444 start.go:495] detecting cgroup driver to use...
	I0920 20:48:20.337145   17444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:20.355079   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 20:48:20.365469   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 20:48:20.375864   17444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 20:48:20.375926   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 20:48:20.386206   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:20.396294   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 20:48:20.406795   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:20.416961   17444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:20.427350   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 20:48:20.437464   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 20:48:20.447222   17444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 20:48:20.457070   17444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:20.466147   17444 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 20:48:20.466186   17444 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 20:48:20.476217   17444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:48:20.485597   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:20.594673   17444 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 20:48:20.619271   17444 start.go:495] detecting cgroup driver to use...
	I0920 20:48:20.619358   17444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 20:48:20.642195   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 20:48:20.657813   17444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 20:48:20.680164   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 20:48:20.693203   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 20:48:20.706094   17444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 20:48:20.738408   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 20:48:20.751515   17444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:20.769305   17444 ssh_runner.go:195] Run: which cri-dockerd
	I0920 20:48:20.772942   17444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 20:48:20.782168   17444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 20:48:20.798372   17444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 20:48:20.906743   17444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 20:48:21.027941   17444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 20:48:21.028057   17444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 20:48:21.045582   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:21.156887   17444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 20:48:23.501094   17444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344169076s)
	I0920 20:48:23.501168   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 20:48:23.514524   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:23.527054   17444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 20:48:23.632986   17444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 20:48:23.750477   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:23.864462   17444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 20:48:23.880848   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:23.893456   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:23.998376   17444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 20:48:24.076782   17444 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 20:48:24.076888   17444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 20:48:24.082835   17444 start.go:563] Will wait 60s for crictl version
	I0920 20:48:24.082889   17444 ssh_runner.go:195] Run: which crictl
	I0920 20:48:24.087024   17444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:48:24.126036   17444 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 20:48:24.126124   17444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 20:48:24.151589   17444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 20:48:24.174121   17444 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 20:48:24.174161   17444 main.go:141] libmachine: (addons-022099) Calling .GetIP
	I0920 20:48:24.176578   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:24.176855   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:24.176882   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:24.177049   17444 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:24.180939   17444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:24.192722   17444 kubeadm.go:883] updating cluster {Name:addons-022099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:48:24.192811   17444 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:48:24.192856   17444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 20:48:24.206971   17444 docker.go:685] Got preloaded images: 
	I0920 20:48:24.206995   17444 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0920 20:48:24.207038   17444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 20:48:24.216283   17444 ssh_runner.go:195] Run: which lz4
	I0920 20:48:24.219998   17444 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 20:48:24.223776   17444 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 20:48:24.223801   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0920 20:48:25.350491   17444 docker.go:649] duration metric: took 1.130517203s to copy over tarball
	I0920 20:48:25.350557   17444 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 20:48:27.152274   17444 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.801688667s)
	I0920 20:48:27.152307   17444 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 20:48:27.190645   17444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 20:48:27.200419   17444 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0920 20:48:27.217204   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:27.323438   17444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 20:48:30.859445   17444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.53597215s)
	I0920 20:48:30.859557   17444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 20:48:30.876751   17444 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 20:48:30.876779   17444 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:48:30.876789   17444 kubeadm.go:934] updating node { 192.168.39.113 8443 v1.31.1 docker true true} ...
	I0920 20:48:30.876893   17444 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-022099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-022099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 20:48:30.876949   17444 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 20:48:30.928932   17444 cni.go:84] Creating CNI manager for ""
	I0920 20:48:30.928967   17444 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:30.928979   17444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:30.929004   17444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-022099 NodeName:addons-022099 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:30.929157   17444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-022099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 20:48:30.929216   17444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:30.938812   17444 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:48:30.938885   17444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:30.949761   17444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0920 20:48:30.968372   17444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:30.985977   17444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 20:48:31.003913   17444 ssh_runner.go:195] Run: grep 192.168.39.113	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:31.007628   17444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:31.019817   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:31.134580   17444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:31.156323   17444 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099 for IP: 192.168.39.113
	I0920 20:48:31.156350   17444 certs.go:194] generating shared ca certs ...
	I0920 20:48:31.156371   17444 certs.go:226] acquiring lock for ca certs: {Name:mk2bce2ba46692132172d96d6024b7247fb1054c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.156581   17444 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9629/.minikube/ca.key
	I0920 20:48:31.234253   17444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9629/.minikube/ca.crt ...
	I0920 20:48:31.234279   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/ca.crt: {Name:mk6693394c1f5d482e05bc7411e6e8737cc8a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.234446   17444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9629/.minikube/ca.key ...
	I0920 20:48:31.234457   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/ca.key: {Name:mk71f92cfa9e4bd5c14ebb4a5cc927abfa94f207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.234526   17444 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.key
	I0920 20:48:31.369245   17444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.crt ...
	I0920 20:48:31.369275   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.crt: {Name:mkd5cc6017e848bfe096e1af229b394202ab1f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.369419   17444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.key ...
	I0920 20:48:31.369429   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.key: {Name:mkc7539fe3bb87fe2e9486f0c86f00cd8410ce89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.369492   17444 certs.go:256] generating profile certs ...
	I0920 20:48:31.369541   17444 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.key
	I0920 20:48:31.369554   17444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt with IP's: []
	I0920 20:48:31.502213   17444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt ...
	I0920 20:48:31.502243   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: {Name:mke4d82450bb09af2a0aa752a9a54f4be1e7ae81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.502398   17444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.key ...
	I0920 20:48:31.502408   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.key: {Name:mkc97ee55fd305e439f7a0e5f06def268a69ad59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.502471   17444 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key.cbe51b0d
	I0920 20:48:31.502488   17444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt.cbe51b0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113]
	I0920 20:48:31.626279   17444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt.cbe51b0d ...
	I0920 20:48:31.626308   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt.cbe51b0d: {Name:mk9232a1dd79393fa5f56019cfebca5dd710c5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.626458   17444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key.cbe51b0d ...
	I0920 20:48:31.626471   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key.cbe51b0d: {Name:mk31c926e50cc1b3339f3bfa82b77095e862dacc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.626534   17444 certs.go:381] copying /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt.cbe51b0d -> /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt
	I0920 20:48:31.626603   17444 certs.go:385] copying /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key.cbe51b0d -> /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key
	I0920 20:48:31.626651   17444 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.key
	I0920 20:48:31.626666   17444 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.crt with IP's: []
	I0920 20:48:31.802864   17444 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.crt ...
	I0920 20:48:31.802892   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.crt: {Name:mk7c4af27cbf3132edd901df892341c66993f648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.803044   17444 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.key ...
	I0920 20:48:31.803055   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.key: {Name:mkf41161f772815c42b196ae51228c6ba950d2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:31.803207   17444 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:48:31.803238   17444 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/ca.pem (1082 bytes)
	I0920 20:48:31.803262   17444 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:31.803285   17444 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9629/.minikube/certs/key.pem (1679 bytes)
	I0920 20:48:31.803807   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:31.830596   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 20:48:31.852879   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:31.882198   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 20:48:31.906356   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 20:48:31.929521   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 20:48:31.953111   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:31.975986   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 20:48:32.000362   17444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:32.024199   17444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:32.040068   17444 ssh_runner.go:195] Run: openssl version
	I0920 20:48:32.045892   17444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:32.055968   17444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:32.060250   17444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:32.060299   17444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:32.065958   17444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:48:32.075998   17444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:32.080017   17444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:32.080059   17444 kubeadm.go:392] StartCluster: {Name:addons-022099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:32.080147   17444 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 20:48:32.094813   17444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:32.104063   17444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:32.112968   17444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:32.122081   17444 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:32.122094   17444 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:32.122124   17444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:32.130404   17444 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:32.130439   17444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:32.139155   17444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:32.147673   17444 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:32.147711   17444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:32.156582   17444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:32.164911   17444 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:32.164955   17444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:32.173639   17444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:32.182002   17444 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:32.182058   17444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 20:48:32.191752   17444 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 20:48:32.239656   17444 kubeadm.go:310] W0920 20:48:32.218888    1516 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:32.240425   17444 kubeadm.go:310] W0920 20:48:32.219838    1516 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:32.339404   17444 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 20:48:42.680980   17444 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:42.681073   17444 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:42.681186   17444 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:42.681322   17444 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:42.681459   17444 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:42.681551   17444 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:42.682933   17444 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:42.683028   17444 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:42.683099   17444 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:42.683173   17444 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:42.683219   17444 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:42.683272   17444 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:42.683313   17444 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:42.683355   17444 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:42.683510   17444 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-022099 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0920 20:48:42.683573   17444 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:42.683695   17444 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-022099 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0920 20:48:42.683776   17444 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:48:42.683865   17444 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:48:42.683933   17444 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:48:42.684017   17444 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:48:42.684102   17444 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:48:42.684175   17444 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:48:42.684226   17444 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:48:42.684305   17444 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:48:42.684382   17444 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:48:42.684527   17444 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:48:42.684633   17444 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:48:42.686117   17444 out.go:235]   - Booting up control plane ...
	I0920 20:48:42.686224   17444 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:48:42.686327   17444 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:48:42.686387   17444 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:48:42.686505   17444 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:48:42.686591   17444 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:48:42.686640   17444 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:48:42.686762   17444 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:48:42.686917   17444 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:48:42.686988   17444 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.243635ms
	I0920 20:48:42.687046   17444 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:48:42.687094   17444 kubeadm.go:310] [api-check] The API server is healthy after 5.502372363s
	I0920 20:48:42.687200   17444 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:48:42.687343   17444 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:48:42.687432   17444 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:48:42.687615   17444 kubeadm.go:310] [mark-control-plane] Marking the node addons-022099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:48:42.687698   17444 kubeadm.go:310] [bootstrap-token] Using token: ehkx9u.xa1vul9odx5jafcz
	I0920 20:48:42.689696   17444 out.go:235]   - Configuring RBAC rules ...
	I0920 20:48:42.689831   17444 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:48:42.689926   17444 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:48:42.690065   17444 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:48:42.690217   17444 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:48:42.690373   17444 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:48:42.690490   17444 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:48:42.690665   17444 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:48:42.690731   17444 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:48:42.690769   17444 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:48:42.690775   17444 kubeadm.go:310] 
	I0920 20:48:42.690821   17444 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:48:42.690827   17444 kubeadm.go:310] 
	I0920 20:48:42.690897   17444 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:48:42.690906   17444 kubeadm.go:310] 
	I0920 20:48:42.690931   17444 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:48:42.690977   17444 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:48:42.691020   17444 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:48:42.691026   17444 kubeadm.go:310] 
	I0920 20:48:42.691074   17444 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:48:42.691081   17444 kubeadm.go:310] 
	I0920 20:48:42.691116   17444 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:48:42.691123   17444 kubeadm.go:310] 
	I0920 20:48:42.691166   17444 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:48:42.691224   17444 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:48:42.691281   17444 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:48:42.691287   17444 kubeadm.go:310] 
	I0920 20:48:42.691349   17444 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:48:42.691412   17444 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:48:42.691418   17444 kubeadm.go:310] 
	I0920 20:48:42.691524   17444 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ehkx9u.xa1vul9odx5jafcz \
	I0920 20:48:42.691656   17444 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:02c2941819f00d892857899517f0f5c5afbdc0d5cf94981c168abeb965ada3b4 \
	I0920 20:48:42.691699   17444 kubeadm.go:310] 	--control-plane 
	I0920 20:48:42.691708   17444 kubeadm.go:310] 
	I0920 20:48:42.691789   17444 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:48:42.691796   17444 kubeadm.go:310] 
	I0920 20:48:42.691857   17444 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ehkx9u.xa1vul9odx5jafcz \
	I0920 20:48:42.691956   17444 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:02c2941819f00d892857899517f0f5c5afbdc0d5cf94981c168abeb965ada3b4 
	I0920 20:48:42.691967   17444 cni.go:84] Creating CNI manager for ""
	I0920 20:48:42.691978   17444 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:42.693252   17444 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:48:42.694370   17444 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:48:42.705100   17444 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 20:48:42.722731   17444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:48:42.722830   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.722872   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-022099 minikube.k8s.io/updated_at=2024_09_20T20_48_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-022099 minikube.k8s.io/primary=true
	I0920 20:48:42.734214   17444 ops.go:34] apiserver oom_adj: -16
	I0920 20:48:42.852736   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:43.353453   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:43.853262   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:44.352824   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:44.852977   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:45.353664   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:45.853048   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:46.353098   17444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:46.441546   17444 kubeadm.go:1113] duration metric: took 3.718777168s to wait for elevateKubeSystemPrivileges
	I0920 20:48:46.441588   17444 kubeadm.go:394] duration metric: took 14.361532691s to StartCluster
	I0920 20:48:46.441609   17444 settings.go:142] acquiring lock: {Name:mk901b1e0cbe250f710830c8a397be4e67da2b26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:46.441740   17444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 20:48:46.442307   17444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9629/kubeconfig: {Name:mk623ed22075d88849b8931fde4f60e21cbf98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:46.442559   17444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:48:46.442578   17444 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:48:46.442648   17444 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 20:48:46.442765   17444 addons.go:69] Setting yakd=true in profile "addons-022099"
	I0920 20:48:46.442778   17444 addons.go:69] Setting cloud-spanner=true in profile "addons-022099"
	I0920 20:48:46.442794   17444 addons.go:69] Setting metrics-server=true in profile "addons-022099"
	I0920 20:48:46.442826   17444 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-022099"
	I0920 20:48:46.442835   17444 addons.go:69] Setting ingress-dns=true in profile "addons-022099"
	I0920 20:48:46.442841   17444 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-022099"
	I0920 20:48:46.442848   17444 addons.go:234] Setting addon metrics-server=true in "addons-022099"
	I0920 20:48:46.442850   17444 config.go:182] Loaded profile config "addons-022099": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:46.442847   17444 addons.go:69] Setting gcp-auth=true in profile "addons-022099"
	I0920 20:48:46.442853   17444 addons.go:69] Setting storage-provisioner=true in profile "addons-022099"
	I0920 20:48:46.442876   17444 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-022099"
	I0920 20:48:46.442885   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.442887   17444 mustload.go:65] Loading cluster: addons-022099
	I0920 20:48:46.442853   17444 addons.go:234] Setting addon ingress-dns=true in "addons-022099"
	I0920 20:48:46.442907   17444 addons.go:234] Setting addon storage-provisioner=true in "addons-022099"
	I0920 20:48:46.442911   17444 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-022099"
	I0920 20:48:46.442926   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.442936   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.442939   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443085   17444 config.go:182] Loaded profile config "addons-022099": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:46.443305   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443308   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443332   17444 addons.go:69] Setting inspektor-gadget=true in profile "addons-022099"
	I0920 20:48:46.443339   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443343   17444 addons.go:234] Setting addon inspektor-gadget=true in "addons-022099"
	I0920 20:48:46.443341   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443355   17444 addons.go:69] Setting registry=true in profile "addons-022099"
	I0920 20:48:46.443362   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443366   17444 addons.go:234] Setting addon registry=true in "addons-022099"
	I0920 20:48:46.443374   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443384   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443393   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.442863   17444 addons.go:69] Setting ingress=true in profile "addons-022099"
	I0920 20:48:46.442817   17444 addons.go:234] Setting addon yakd=true in "addons-022099"
	I0920 20:48:46.442820   17444 addons.go:234] Setting addon cloud-spanner=true in "addons-022099"
	I0920 20:48:46.443342   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443433   17444 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-022099"
	I0920 20:48:46.443441   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443450   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443477   17444 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-022099"
	I0920 20:48:46.443420   17444 addons.go:234] Setting addon ingress=true in "addons-022099"
	I0920 20:48:46.443493   17444 addons.go:69] Setting default-storageclass=true in profile "addons-022099"
	I0920 20:48:46.443500   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443504   17444 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-022099"
	I0920 20:48:46.443517   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443521   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443570   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443609   17444 addons.go:69] Setting volcano=true in profile "addons-022099"
	I0920 20:48:46.443623   17444 addons.go:234] Setting addon volcano=true in "addons-022099"
	I0920 20:48:46.443643   17444 addons.go:69] Setting volumesnapshots=true in profile "addons-022099"
	I0920 20:48:46.443651   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443657   17444 addons.go:234] Setting addon volumesnapshots=true in "addons-022099"
	I0920 20:48:46.443679   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443683   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443707   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443731   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.443821   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.443894   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.443927   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.444002   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.444191   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.444217   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.444341   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.444384   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.444411   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.444465   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.444502   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.444737   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.444766   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.444842   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.445420   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.445470   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.449130   17444 out.go:177] * Verifying Kubernetes components...
	I0920 20:48:46.450576   17444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:46.464828   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.464892   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.465593   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I0920 20:48:46.465776   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0920 20:48:46.465862   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0920 20:48:46.465933   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0920 20:48:46.466222   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.466371   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.466451   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.466924   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.466940   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.467271   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.467296   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.467376   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.467387   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.467445   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.467513   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.467600   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.468126   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.468169   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.468444   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.468459   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.468572   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.469860   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.469879   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.470036   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.470572   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.470612   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.475428   17444 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-022099"
	I0920 20:48:46.475471   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.475958   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.476001   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.478287   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.479662   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.479743   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.484980   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0920 20:48:46.485994   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.486620   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.486672   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.487072   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.487697   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.487741   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.489544   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0920 20:48:46.490094   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.490560   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.490585   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.490941   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.491556   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.491592   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.501106   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0920 20:48:46.501778   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.502420   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.502439   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.502814   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.503541   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.503629   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.510874   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0920 20:48:46.511452   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.511956   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0920 20:48:46.512552   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.512570   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.512651   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.513126   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.513140   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.513547   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.514189   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.514219   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.515291   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.515770   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.515807   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.517703   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 20:48:46.517884   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0920 20:48:46.518241   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.518341   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.518824   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.518852   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.518927   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0920 20:48:46.519136   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.519155   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.519520   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.519848   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.520049   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.520089   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.520323   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.520357   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.521427   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0920 20:48:46.521436   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0920 20:48:46.521737   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.521987   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.522339   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.522354   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.522435   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.522771   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.522787   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.523742   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0920 20:48:46.523809   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.523865   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.523984   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.523995   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.524238   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.524541   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.525846   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.527736   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:48:46.528052   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43257
	I0920 20:48:46.528813   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:48:46.528832   17444 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:48:46.528852   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.528964   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.529378   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.529399   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.529854   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.531499   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.531805   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.531838   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.532029   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.532183   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.532332   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.532447   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0920 20:48:46.532525   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.535462   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I0920 20:48:46.538037   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0920 20:48:46.538367   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.538899   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.538922   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.539838   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.540420   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.540462   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.542033   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0920 20:48:46.542421   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.542901   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.542925   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.543071   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0920 20:48:46.543442   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.543696   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.544166   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.544189   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.544508   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.544635   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.546421   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.548529   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.548958   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.548993   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.549038   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.549045   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.549071   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.549318   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.549412   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.549454   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.549075   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.549912   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.549931   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.549912   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.549958   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.550450   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.550624   17444 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:48:46.550664   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.550693   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.550883   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.551793   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.551800   17444 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:48:46.551814   17444 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:48:46.551833   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.551868   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.552999   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.553174   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.553273   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.553567   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0920 20:48:46.554355   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.554944   17444 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:48:46.555067   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.555083   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.555606   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.555676   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0920 20:48:46.555899   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.556238   17444 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:46.556255   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:48:46.556272   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.556379   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.556456   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.557605   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.557855   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.557870   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.557985   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.558236   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.558436   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.558605   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.558658   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.558696   17444 addons.go:234] Setting addon default-storageclass=true in "addons-022099"
	I0920 20:48:46.558731   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:46.558954   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.559199   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.559334   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.559367   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.559739   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.559846   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.560139   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.560231   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.560248   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.560411   17444 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:48:46.560412   17444 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:48:46.560593   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.560759   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.560892   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.561131   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.561725   17444 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:46.561741   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:48:46.561757   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.562272   17444 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:48:46.562285   17444 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:48:46.562300   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.562884   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.564363   17444 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:46.565599   17444 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 20:48:46.566408   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.567305   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.567645   17444 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:46.568979   17444 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:46.568998   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 20:48:46.569015   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.569116   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0920 20:48:46.569125   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.569138   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.569156   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.569187   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.569215   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.569232   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.569363   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.569472   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.569564   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.570662   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.571208   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.571384   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.571403   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.571476   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.571658   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.571907   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.572105   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.573987   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.575209   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.575575   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.575605   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.575807   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.575885   17444 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:48:46.576001   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.576167   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.576304   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.577058   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0920 20:48:46.577595   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.578111   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.578129   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.578200   17444 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:48:46.578644   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.578841   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.579718   17444 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:46.579734   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:48:46.579750   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.580419   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I0920 20:48:46.580601   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0920 20:48:46.580870   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0920 20:48:46.580987   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.581101   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.581205   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.581913   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.581931   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.582587   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0920 20:48:46.582587   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.582903   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.582912   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.583271   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.583294   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.583409   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.583466   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.583554   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.583939   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.583967   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.584028   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.584216   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.584640   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.584670   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.584777   17444 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:48:46.584789   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.585237   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.585528   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.585548   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.585564   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.585737   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.585949   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.586297   17444 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:46.586308   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.586318   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:48:46.586339   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.586363   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.586943   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.587189   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.587221   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.587341   17444 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 20:48:46.588248   17444 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:48:46.589010   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.589056   17444 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:48:46.589174   17444 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:46.589439   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 20:48:46.589455   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.589553   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.589844   17444 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:48:46.589863   17444 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:48:46.589879   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.590107   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.590126   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.590197   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.590355   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.590494   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.590729   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.590892   17444 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 20:48:46.592233   17444 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:48:46.592292   17444 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 20:48:46.592820   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.593177   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.593265   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.593364   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.593397   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.593512   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.593748   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.593804   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.593929   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.593961   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.593988   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.594144   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.594224   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.594320   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.594372   17444 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:48:46.594381   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:48:46.594394   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.595154   17444 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 20:48:46.597479   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.597677   17444 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:48:46.597700   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 20:48:46.597708   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0920 20:48:46.597716   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.597668   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0920 20:48:46.597911   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.597938   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.598083   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.598162   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.598217   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.598306   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.598728   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.598949   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.599215   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.599226   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.599320   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.599330   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.599512   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.599676   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.599803   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.600341   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:46.600378   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:46.601183   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.601535   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.601554   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.601693   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.601831   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.601883   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.601990   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.602101   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.603495   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:48:46.604706   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:48:46.605621   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:48:46.606453   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:48:46.607735   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:48:46.608700   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:48:46.609717   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:48:46.610745   17444 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:48:46.611633   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:48:46.611645   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:48:46.611658   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.614093   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.614424   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.614457   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.614615   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.614792   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.614933   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.615049   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:46.616178   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0920 20:48:46.616968   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:46.617354   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:46.617379   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:46.617644   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:46.617778   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:46.618778   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:46.618939   17444 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:46.618949   17444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:48:46.618960   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:46.621226   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.621508   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:46.621533   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:46.621731   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:46.621879   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:46.622012   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:46.622126   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	W0920 20:48:46.625209   17444 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51932->192.168.39.113:22: read: connection reset by peer
	I0920 20:48:46.625230   17444 retry.go:31] will retry after 166.482571ms: ssh: handshake failed: read tcp 192.168.39.1:51932->192.168.39.113:22: read: connection reset by peer
	I0920 20:48:46.839813   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:46.932598   17444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:46.932648   17444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 20:48:46.938721   17444 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:48:46.938741   17444 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:48:46.967615   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:46.986442   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:47.000271   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:47.001229   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:47.013655   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:48:47.013673   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:48:47.134348   17444 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:48:47.134379   17444 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:48:47.135482   17444 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:48:47.135499   17444 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:48:47.149123   17444 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:48:47.149141   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:48:47.150869   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:48:47.246157   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:47.267963   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:48:47.267986   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:48:47.283788   17444 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:48:47.283813   17444 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:48:47.295913   17444 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:48:47.295934   17444 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:48:47.296989   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:47.297798   17444 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:48:47.297819   17444 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:48:47.410623   17444 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:48:47.410652   17444 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:48:47.462094   17444 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:48:47.462123   17444 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:48:47.470491   17444 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:47.470511   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:48:47.725118   17444 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:48:47.725139   17444 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:48:47.731396   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:48:47.731420   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:48:47.743521   17444 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:48:47.743542   17444 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:48:47.767512   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:47.785950   17444 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:48:47.785977   17444 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:48:47.994192   17444 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:47.994225   17444 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:48:48.145025   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:48:48.145050   17444 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:48:48.336167   17444 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:48:48.336189   17444 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:48:48.380874   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:48:48.380907   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:48:48.410105   17444 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:48.410128   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:48:48.628457   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:48.751005   17444 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:48:48.751027   17444 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:48:48.796920   17444 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:48.796942   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:48:48.949303   17444 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:48:48.949330   17444 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:48:49.068955   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:49.175173   17444 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:48:49.175199   17444 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:48:49.187040   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:48:49.187063   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:48:49.227661   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:49.281148   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:48:49.281175   17444 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:48:49.323815   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.483956393s)
	I0920 20:48:49.323835   17444 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.391206498s)
	I0920 20:48:49.323870   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:49.323883   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:49.324147   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:49.324221   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:49.324233   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:49.324246   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:49.324253   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:49.324559   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:49.324576   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:49.324973   17444 node_ready.go:35] waiting up to 6m0s for node "addons-022099" to be "Ready" ...
	I0920 20:48:49.328792   17444 node_ready.go:49] node "addons-022099" has status "Ready":"True"
	I0920 20:48:49.328814   17444 node_ready.go:38] duration metric: took 3.810624ms for node "addons-022099" to be "Ready" ...
	I0920 20:48:49.328824   17444 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:48:49.338658   17444 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:49.508687   17444 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:49.508714   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:48:49.583557   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:48:49.583583   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:48:49.818218   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:49.934297   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:48:49.934332   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:48:50.026164   17444 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.093469402s)
	I0920 20:48:50.026208   17444 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
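The `sed` pipeline that just completed rewrites the `coredns` ConfigMap so that `host.minikube.internal` resolves to the host-side gateway IP (192.168.39.1): a `hosts` block is inserted immediately before the `forward . /etc/resolv.conf` line, and `log` before `errors`. The resulting Corefile fragment looks roughly like this (unrelated plugins elided; this is a reconstruction from the sed expressions, not a dump of the actual ConfigMap):

```
.:53 {
    log
    errors
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}
```

The `fallthrough` directive matters: queries for any name other than `host.minikube.internal` fall through to the `forward` plugin instead of getting NXDOMAIN from the `hosts` plugin.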
	I0920 20:48:50.341591   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.373939754s)
	I0920 20:48:50.341663   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.341677   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:50.341950   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.341993   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.342009   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.342017   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:50.342243   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.342282   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.456011   17444 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:50.456034   17444 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:48:50.530234   17444 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-022099" context rescaled to 1 replicas
	I0920 20:48:50.655346   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:51.369735   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.384888   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.623366   17444 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:48:53.623406   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:53.626608   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:53.627031   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:53.627058   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:53.627308   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:53.627513   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:53.627694   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:53.627834   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:54.713906   17444 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:48:54.872064   17444 addons.go:234] Setting addon gcp-auth=true in "addons-022099"
	I0920 20:48:54.872122   17444 host.go:66] Checking if "addons-022099" exists ...
	I0920 20:48:54.872667   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:54.872738   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:54.888311   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0920 20:48:54.888809   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:54.889361   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:54.889384   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:54.889760   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:54.890227   17444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 20:48:54.890265   17444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:54.906886   17444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0920 20:48:54.907414   17444 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:54.907939   17444 main.go:141] libmachine: Using API Version  1
	I0920 20:48:54.907966   17444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:54.908334   17444 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:54.908543   17444 main.go:141] libmachine: (addons-022099) Calling .GetState
	I0920 20:48:54.910286   17444 main.go:141] libmachine: (addons-022099) Calling .DriverName
	I0920 20:48:54.910520   17444 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:48:54.910542   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHHostname
	I0920 20:48:54.913517   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:54.913962   17444 main.go:141] libmachine: (addons-022099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:f4:69", ip: ""} in network mk-addons-022099: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:07 +0000 UTC Type:0 Mac:52:54:00:74:f4:69 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-022099 Clientid:01:52:54:00:74:f4:69}
	I0920 20:48:54.913986   17444 main.go:141] libmachine: (addons-022099) DBG | domain addons-022099 has defined IP address 192.168.39.113 and MAC address 52:54:00:74:f4:69 in network mk-addons-022099
	I0920 20:48:54.914140   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHPort
	I0920 20:48:54.914306   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHKeyPath
	I0920 20:48:54.914470   17444 main.go:141] libmachine: (addons-022099) Calling .GetSSHUsername
	I0920 20:48:54.914601   17444 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/addons-022099/id_rsa Username:docker}
	I0920 20:48:55.894604   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:56.774886   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.788391967s)
	I0920 20:48:56.774948   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.774963   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.774958   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.774657735s)
	I0920 20:48:56.774995   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.775013   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.775082   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.773829805s)
	I0920 20:48:56.775113   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.775124   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.775292   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.775297   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.775314   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.775315   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.775323   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.775323   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.775332   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.775326   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.775346   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.775355   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.775394   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.775404   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.775406   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.775411   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.775417   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.775524   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.775532   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.776662   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.776677   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.776699   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.776701   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.776709   17444 addons.go:475] Verifying addon ingress=true in "addons-022099"
	I0920 20:48:56.776735   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.776743   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:56.778326   17444 out.go:177] * Verifying ingress addon...
	I0920 20:48:56.780147   17444 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 20:48:56.800820   17444 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 20:48:56.800842   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:56.823310   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:56.823331   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:56.823615   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:56.823681   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:56.823705   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:57.291844   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.909800   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.933281   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:58.331194   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.833553   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.377260   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.505054   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.354149556s)
	I0920 20:48:59.505094   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505107   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505171   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.258978068s)
	I0920 20:48:59.505202   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.208190909s)
	I0920 20:48:59.505215   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505229   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505230   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505243   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505245   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.737706458s)
	I0920 20:48:59.505274   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505289   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505363   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.876875889s)
	I0920 20:48:59.505389   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505401   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505404   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.505417   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.505428   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.505419   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.436426775s)
	I0920 20:48:59.505442   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.505447   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505453   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505461   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505454   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505497   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.505508   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.505516   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505532   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.505532   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505539   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.505547   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505553   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505557   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.505599   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.277898545s)
	W0920 20:48:59.505632   17444 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:59.505656   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.687398413s)
	I0920 20:48:59.505676   17444 retry.go:31] will retry after 141.536527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:59.505692   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.505705   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.505821   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.507377   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.507393   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.507404   17444 addons.go:475] Verifying addon registry=true in "addons-022099"
	I0920 20:48:59.507775   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.507794   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.507795   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.507814   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.507816   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.507821   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.507834   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.507862   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.507874   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.507888   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.507894   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.507958   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.507971   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.507978   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.507987   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.508024   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508047   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.508059   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.508071   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508107   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.508121   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.508125   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508140   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508161   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508162   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.508173   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.508176   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.508182   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.508190   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.508183   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.508959   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.508980   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.509003   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.509009   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.509081   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.509096   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.509241   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.509493   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.509503   17444 addons.go:475] Verifying addon metrics-server=true in "addons-022099"
	I0920 20:48:59.510538   17444 out.go:177] * Verifying registry addon...
	I0920 20:48:59.510723   17444 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-022099 service yakd-dashboard -n yakd-dashboard
	
	I0920 20:48:59.512366   17444 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:48:59.577054   17444 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:48:59.577087   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.636683   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.636702   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.636970   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.637031   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.637060   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:48:59.648351   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:59.666325   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.010922171s)
	I0920 20:48:59.666334   17444 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.755794196s)
	I0920 20:48:59.666366   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.666380   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.666659   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.666683   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.666693   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:59.666701   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:48:59.666898   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:59.666909   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:59.666919   17444 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-022099"
	I0920 20:48:59.667879   17444 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:59.668639   17444 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:48:59.669976   17444 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:48:59.670660   17444 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:48:59.671179   17444 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:48:59.671202   17444 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:48:59.738064   17444 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:48:59.738092   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.741055   17444 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:48:59.741073   17444 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:48:59.823182   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.915722   17444 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:59.915744   17444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:48:59.964490   17444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:49:00.018930   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.184304   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.284819   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.345060   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:00.517097   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.675961   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.784550   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.026282   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.182193   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.286546   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.517809   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.704949   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.814999   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.015924   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.068608   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.104082533s)
	I0920 20:49:02.068650   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:49:02.068662   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:49:02.068720   17444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.42032714s)
	I0920 20:49:02.068767   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:49:02.068786   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:49:02.068896   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:49:02.068901   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:49:02.068924   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:49:02.068932   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:49:02.068940   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:49:02.068997   17444 main.go:141] libmachine: (addons-022099) DBG | Closing plugin on server side
	I0920 20:49:02.069091   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:49:02.069131   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:49:02.069148   17444 main.go:141] libmachine: Making call to close driver server
	I0920 20:49:02.069156   17444 main.go:141] libmachine: (addons-022099) Calling .Close
	I0920 20:49:02.069159   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:49:02.069171   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:49:02.069381   17444 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:49:02.069394   17444 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:49:02.071011   17444 addons.go:475] Verifying addon gcp-auth=true in "addons-022099"
	I0920 20:49:02.072868   17444 out.go:177] * Verifying gcp-auth addon...
	I0920 20:49:02.075320   17444 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:49:02.078397   17444 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:49:02.175704   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.290483   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.516843   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.675546   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.784683   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.852318   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:03.017108   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.176270   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.284703   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.516748   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.679189   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.785127   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.016700   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.175494   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.284347   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.516795   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.675889   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.784755   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.016994   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.176021   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.285467   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.345011   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:05.516503   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.676614   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.784681   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.017269   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:06.176140   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.287385   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.516788   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:06.676161   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.783790   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.016569   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.176228   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.285723   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.516419   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.682454   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.066041   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.067074   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.075476   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:08.183659   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.284965   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.516611   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.675798   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.785234   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.017069   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.175082   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.283842   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.516776   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.675181   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.784036   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.016178   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.176529   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.284697   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.345421   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:10.518513   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.675461   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.785487   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.016135   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.175700   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.284974   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.751063   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.753803   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.851436   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.016250   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.175979   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.283786   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.517585   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.685778   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.788092   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.845245   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:13.016559   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.175161   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.284450   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.516887   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.674925   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.785567   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.016258   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.174802   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.284555   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.516041   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.675237   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.784337   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.016508   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.176220   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.284220   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.345652   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:15.515521   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.675083   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.784973   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.017122   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.176043   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.291229   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.530903   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.675776   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.796250   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.017023   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.175808   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.286371   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.516697   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.675940   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.784356   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.980649   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:18.018610   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.175879   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.284701   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.516642   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.675859   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.784354   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.016841   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.175474   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.284790   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.517735   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.973852   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.976025   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.019918   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.177428   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.287766   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.344238   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:20.518368   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.675623   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.784927   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.016835   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.175669   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.283994   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.516344   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.675117   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.784795   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.018033   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.175387   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.283872   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.345287   17444 pod_ready.go:103] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:22.516432   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.675225   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.784609   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.844655   17444 pod_ready.go:93] pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:22.844677   17444 pod_ready.go:82] duration metric: took 33.505993874s for pod "coredns-7c65d6cfc9-5sk8c" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.844685   17444 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hv6q5" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.846636   17444 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hv6q5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hv6q5" not found
	I0920 20:49:22.846656   17444 pod_ready.go:82] duration metric: took 1.965508ms for pod "coredns-7c65d6cfc9-hv6q5" in "kube-system" namespace to be "Ready" ...
	E0920 20:49:22.846664   17444 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hv6q5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hv6q5" not found
	I0920 20:49:22.846669   17444 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.850969   17444 pod_ready.go:93] pod "etcd-addons-022099" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:22.850985   17444 pod_ready.go:82] duration metric: took 4.310508ms for pod "etcd-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.850992   17444 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.857602   17444 pod_ready.go:93] pod "kube-apiserver-addons-022099" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:22.857617   17444 pod_ready.go:82] duration metric: took 6.620645ms for pod "kube-apiserver-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.857626   17444 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.862584   17444 pod_ready.go:93] pod "kube-controller-manager-addons-022099" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:22.862599   17444 pod_ready.go:82] duration metric: took 4.968427ms for pod "kube-controller-manager-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.862606   17444 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tcxgk" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.016576   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:23.042519   17444 pod_ready.go:93] pod "kube-proxy-tcxgk" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:23.042539   17444 pod_ready.go:82] duration metric: took 179.927082ms for pod "kube-proxy-tcxgk" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.042547   17444 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.175293   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.284462   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.443480   17444 pod_ready.go:93] pod "kube-scheduler-addons-022099" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:23.443504   17444 pod_ready.go:82] duration metric: took 400.951356ms for pod "kube-scheduler-addons-022099" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.443519   17444 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wnb6h" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.516869   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:23.676312   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.784562   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.842719   17444 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-wnb6h" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:23.842739   17444 pod_ready.go:82] duration metric: took 399.214149ms for pod "nvidia-device-plugin-daemonset-wnb6h" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:23.842746   17444 pod_ready.go:39] duration metric: took 34.513911502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:23.842764   17444 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:23.842813   17444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:23.866499   17444 api_server.go:72] duration metric: took 37.423890978s to wait for apiserver process to appear ...
	I0920 20:49:23.866520   17444 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:23.866535   17444 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I0920 20:49:23.871195   17444 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I0920 20:49:23.872097   17444 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:23.872115   17444 api_server.go:131] duration metric: took 5.590303ms to wait for apiserver health ...
	I0920 20:49:23.872123   17444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:24.017535   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:24.294232   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.295832   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.298658   17444 system_pods.go:59] 17 kube-system pods found
	I0920 20:49:24.298682   17444 system_pods.go:61] "coredns-7c65d6cfc9-5sk8c" [96202c2f-ebfd-4003-8f99-38ce8408cee4] Running
	I0920 20:49:24.298691   17444 system_pods.go:61] "csi-hostpath-attacher-0" [f0169cbf-b28a-483b-8e80-07f7a84484e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:24.298699   17444 system_pods.go:61] "csi-hostpath-resizer-0" [0b2523b3-b44d-4299-9cac-b96510cec65d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:24.298709   17444 system_pods.go:61] "csi-hostpathplugin-6z8rm" [724d7efd-a0a5-410f-9493-b439157b2a5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:24.298714   17444 system_pods.go:61] "etcd-addons-022099" [647ca4a0-a8a2-496e-bc33-b0eef97573cc] Running
	I0920 20:49:24.298718   17444 system_pods.go:61] "kube-apiserver-addons-022099" [4579ea61-013d-4c1e-bba8-ba0b51ecb381] Running
	I0920 20:49:24.298721   17444 system_pods.go:61] "kube-controller-manager-addons-022099" [2915e32a-35d5-4661-81ee-567a8ad20b46] Running
	I0920 20:49:24.298727   17444 system_pods.go:61] "kube-ingress-dns-minikube" [c588134b-faf7-4e30-a245-cb1d3f1b7f44] Running
	I0920 20:49:24.298730   17444 system_pods.go:61] "kube-proxy-tcxgk" [9b60d5c9-84f2-4185-a6b3-3e635eb22f77] Running
	I0920 20:49:24.298734   17444 system_pods.go:61] "kube-scheduler-addons-022099" [cb69eeb8-2c82-4878-892f-ac4720e6bf37] Running
	I0920 20:49:24.298739   17444 system_pods.go:61] "metrics-server-84c5f94fbc-f5f6r" [df9fba66-473e-4b23-86f7-65a886961fee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:24.298745   17444 system_pods.go:61] "nvidia-device-plugin-daemonset-wnb6h" [1daf9992-5ba7-421d-ac66-845a7f8f95f5] Running
	I0920 20:49:24.298752   17444 system_pods.go:61] "registry-66c9cd494c-8nlrk" [ab4d0c54-9272-4637-a6f3-5c42d97b42cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:24.298759   17444 system_pods.go:61] "registry-proxy-bw9ps" [fa5aee16-1db8-452c-a07d-1062044723ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:24.298768   17444 system_pods.go:61] "snapshot-controller-56fcc65765-82wjf" [00fce8d1-b598-4915-971f-cbfefc08875b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:24.298776   17444 system_pods.go:61] "snapshot-controller-56fcc65765-ckbnc" [bea3d08c-0ec0-4be2-a2cd-e3b48b9daa55] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:24.298780   17444 system_pods.go:61] "storage-provisioner" [cd4aa4e0-4cdf-4d76-a3f4-4e4d8e60aedf] Running
	I0920 20:49:24.298788   17444 system_pods.go:74] duration metric: took 426.661071ms to wait for pod list to return data ...
	I0920 20:49:24.298795   17444 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:24.301224   17444 default_sa.go:45] found service account: "default"
	I0920 20:49:24.301242   17444 default_sa.go:55] duration metric: took 2.441549ms for default service account to be created ...
	I0920 20:49:24.301249   17444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:24.447203   17444 system_pods.go:86] 17 kube-system pods found
	I0920 20:49:24.447232   17444 system_pods.go:89] "coredns-7c65d6cfc9-5sk8c" [96202c2f-ebfd-4003-8f99-38ce8408cee4] Running
	I0920 20:49:24.447241   17444 system_pods.go:89] "csi-hostpath-attacher-0" [f0169cbf-b28a-483b-8e80-07f7a84484e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:24.447248   17444 system_pods.go:89] "csi-hostpath-resizer-0" [0b2523b3-b44d-4299-9cac-b96510cec65d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:24.447256   17444 system_pods.go:89] "csi-hostpathplugin-6z8rm" [724d7efd-a0a5-410f-9493-b439157b2a5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:24.447261   17444 system_pods.go:89] "etcd-addons-022099" [647ca4a0-a8a2-496e-bc33-b0eef97573cc] Running
	I0920 20:49:24.447266   17444 system_pods.go:89] "kube-apiserver-addons-022099" [4579ea61-013d-4c1e-bba8-ba0b51ecb381] Running
	I0920 20:49:24.447270   17444 system_pods.go:89] "kube-controller-manager-addons-022099" [2915e32a-35d5-4661-81ee-567a8ad20b46] Running
	I0920 20:49:24.447274   17444 system_pods.go:89] "kube-ingress-dns-minikube" [c588134b-faf7-4e30-a245-cb1d3f1b7f44] Running
	I0920 20:49:24.447278   17444 system_pods.go:89] "kube-proxy-tcxgk" [9b60d5c9-84f2-4185-a6b3-3e635eb22f77] Running
	I0920 20:49:24.447281   17444 system_pods.go:89] "kube-scheduler-addons-022099" [cb69eeb8-2c82-4878-892f-ac4720e6bf37] Running
	I0920 20:49:24.447286   17444 system_pods.go:89] "metrics-server-84c5f94fbc-f5f6r" [df9fba66-473e-4b23-86f7-65a886961fee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:24.447295   17444 system_pods.go:89] "nvidia-device-plugin-daemonset-wnb6h" [1daf9992-5ba7-421d-ac66-845a7f8f95f5] Running
	I0920 20:49:24.447300   17444 system_pods.go:89] "registry-66c9cd494c-8nlrk" [ab4d0c54-9272-4637-a6f3-5c42d97b42cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:24.447307   17444 system_pods.go:89] "registry-proxy-bw9ps" [fa5aee16-1db8-452c-a07d-1062044723ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:24.447315   17444 system_pods.go:89] "snapshot-controller-56fcc65765-82wjf" [00fce8d1-b598-4915-971f-cbfefc08875b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:24.447319   17444 system_pods.go:89] "snapshot-controller-56fcc65765-ckbnc" [bea3d08c-0ec0-4be2-a2cd-e3b48b9daa55] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:24.447323   17444 system_pods.go:89] "storage-provisioner" [cd4aa4e0-4cdf-4d76-a3f4-4e4d8e60aedf] Running
	I0920 20:49:24.447331   17444 system_pods.go:126] duration metric: took 146.077084ms to wait for k8s-apps to be running ...
	I0920 20:49:24.447337   17444 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:24.447380   17444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:24.465290   17444 system_svc.go:56] duration metric: took 17.946525ms WaitForService to wait for kubelet
	I0920 20:49:24.465314   17444 kubeadm.go:582] duration metric: took 38.022707736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:49:24.465337   17444 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:24.516595   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:24.642807   17444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 20:49:24.642838   17444 node_conditions.go:123] node cpu capacity is 2
	I0920 20:49:24.642851   17444 node_conditions.go:105] duration metric: took 177.508532ms to run NodePressure ...
	I0920 20:49:24.642862   17444 start.go:241] waiting for startup goroutines ...
	I0920 20:49:24.675108   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.785267   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.016883   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:25.176634   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.284344   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.517905   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:25.675949   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.784255   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.016578   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:26.175713   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.284382   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.516177   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:26.675570   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.785552   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.016316   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:27.176666   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.285142   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.519239   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:27.677358   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.787936   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.016975   17444 kapi.go:107] duration metric: took 28.504604421s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:28.177023   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.284907   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.675034   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.786409   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.176258   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.285070   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.675413   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.785177   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.175927   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.283566   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.681242   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.784568   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.175566   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.285108   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.675973   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.784047   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.176170   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.285153   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.675412   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.785400   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.175340   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.284164   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.015452   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.017160   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.174873   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.285033   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.674839   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.784366   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.175496   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.283790   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.697142   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.798201   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.176651   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.285404   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.679529   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.787235   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.176625   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.284654   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.675210   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.784416   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.186418   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.284337   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.675372   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.784388   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.175277   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.284040   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.675373   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.051002   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.176718   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.284953   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.684628   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.796676   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.177163   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.283712   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.675961   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.784415   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.176358   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.284343   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.681387   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.784979   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.180392   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.284281   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.675884   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.785236   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.181389   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.283756   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.674996   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.785072   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.176175   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.284224   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.683567   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.785181   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.176990   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.284206   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.675969   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.785201   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.176251   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.285258   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.675592   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.784413   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.175202   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.284122   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.681958   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.785213   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.176153   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.284143   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.675814   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.784735   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.175279   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:50.284773   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.677824   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:50.785716   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.175882   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:51.284746   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.675610   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:51.784635   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.175569   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:52.284053   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.682622   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:52.866687   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.175235   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:53.284241   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.686878   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:53.805105   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.175674   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:54.287123   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.681334   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:54.784738   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.176254   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:55.285705   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.678591   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:55.787277   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.175670   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:56.284556   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.675642   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:56.784951   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.176584   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:57.286909   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.675568   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:57.787705   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.176902   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:58.285137   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.675855   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:58.786941   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.181568   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:59.288215   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.676185   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:59.784616   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.176566   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:00.284424   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.681733   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:00.784757   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.176139   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:01.285796   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.683856   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:01.792399   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.178909   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:02.290066   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.680116   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:02.788019   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:03.185372   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:03.298754   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:03.676681   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:03.787347   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:04.175930   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:04.284655   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:04.677021   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:04.785282   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:05.175716   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:05.285221   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:05.678224   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:05.784158   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:06.175280   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:06.284822   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:06.681652   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:06.784797   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:07.177072   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:07.284680   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:07.675131   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:07.784462   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:08.175334   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:08.284892   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:08.675492   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:08.784948   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:09.175682   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:09.284465   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:09.674991   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:09.785094   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:10.176086   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:10.290062   17444 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:10.679980   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:10.783692   17444 kapi.go:107] duration metric: took 1m14.003542946s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 20:50:11.178851   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:11.686562   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:12.177161   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:12.681277   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:13.181671   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:13.676156   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:14.181361   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:50:14.681174   17444 kapi.go:107] duration metric: took 1m15.010510247s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:50:25.079967   17444 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:50:25.079992   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:25.580019   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:26.079506   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:26.579181   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:27.080300   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:27.579069   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:28.079953   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:28.579126   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:29.079996   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:29.579435   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:30.078977   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:30.580420   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:31.079509   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:31.580291   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:32.078983   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:32.580856   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:33.079210   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:33.578492   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:34.079268   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:34.580189   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:35.079065   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:35.579711   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:36.078999   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:36.579095   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:37.079660   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:37.579217   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:38.079573   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:38.578664   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:39.079565   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:39.579011   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:40.079182   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:40.578719   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:41.079217   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:41.580089   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:42.080594   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:42.579351   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:43.079508   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:43.578785   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:44.079148   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:44.579792   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:45.079602   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:45.578846   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:46.079144   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:46.578752   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:47.079741   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:47.579240   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:48.079640   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:48.578779   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:49.079366   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:49.578603   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:50.078754   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:50.579606   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:51.082474   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:51.578586   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:52.079132   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:52.578676   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:53.078980   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:53.579579   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:54.078805   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:54.579353   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:55.079732   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:55.580114   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:56.080092   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:56.580265   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:57.079862   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:57.578949   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:58.079289   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:58.578771   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:59.079391   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:59.578844   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:00.079253   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:00.578426   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:01.079241   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:01.579504   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:02.079652   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:02.578783   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:03.079205   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:03.578382   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:04.079012   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:04.579715   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:05.079519   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:05.579455   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:06.078960   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:06.579253   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:07.078314   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:07.578572   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:08.086081   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:08.579952   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:09.079553   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:09.579562   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:10.079414   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:10.578574   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:11.079490   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:11.578785   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:12.080208   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:12.578480   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:13.079329   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:13.579061   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:14.079758   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:14.579017   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:15.079869   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:15.579757   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:16.079187   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:16.579482   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:17.080271   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:17.578445   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:18.078623   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:18.578786   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:19.079588   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:19.579625   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:20.079291   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:20.579066   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:21.080099   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:21.579735   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:22.079638   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:22.579170   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:23.079745   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:23.579688   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:24.078964   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:24.579989   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:25.079903   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:25.579812   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:26.079377   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:26.579060   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:27.080085   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:27.579375   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:28.079644   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:28.579441   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:29.079279   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:29.578930   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:30.079843   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:30.579248   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:31.093030   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:31.579277   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:32.079103   17444 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:51:32.581029   17444 kapi.go:107] duration metric: took 2m30.505705391s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:51:32.582805   17444 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-022099 cluster.
	I0920 20:51:32.584263   17444 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:51:32.585431   17444 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:51:32.586839   17444 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, volcano, ingress-dns, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0920 20:51:32.588254   17444 addons.go:510] duration metric: took 2m46.14561273s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher volcano ingress-dns inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0920 20:51:32.588291   17444 start.go:246] waiting for cluster config update ...
	I0920 20:51:32.588308   17444 start.go:255] writing updated cluster config ...
	I0920 20:51:32.588557   17444 ssh_runner.go:195] Run: rm -f paused
	I0920 20:51:32.639387   17444 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:51:32.641239   17444 out.go:177] * Done! kubectl is now configured to use "addons-022099" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 21:01:23 addons-022099 dockerd[1197]: time="2024-09-20T21:01:23.688644203Z" level=info msg="ignoring event" container=f23d3b063c90c5ca5f26a0abfeec01d8b28d3a7fc06dfb6d500ef2fbc21dfd90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:23 addons-022099 dockerd[1203]: time="2024-09-20T21:01:23.688947524Z" level=info msg="shim disconnected" id=f23d3b063c90c5ca5f26a0abfeec01d8b28d3a7fc06dfb6d500ef2fbc21dfd90 namespace=moby
	Sep 20 21:01:23 addons-022099 dockerd[1203]: time="2024-09-20T21:01:23.689004330Z" level=warning msg="cleaning up after shim disconnected" id=f23d3b063c90c5ca5f26a0abfeec01d8b28d3a7fc06dfb6d500ef2fbc21dfd90 namespace=moby
	Sep 20 21:01:23 addons-022099 dockerd[1203]: time="2024-09-20T21:01:23.689014150Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 21:01:23 addons-022099 dockerd[1203]: time="2024-09-20T21:01:23.697571757Z" level=warning msg="cleanup warnings time=\"2024-09-20T21:01:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1197]: time="2024-09-20T21:01:25.115323581Z" level=info msg="ignoring event" container=d62e427408193d81dfeef6197dfd783599f8981df17328270a7bcca009e26646 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.115460482Z" level=info msg="shim disconnected" id=d62e427408193d81dfeef6197dfd783599f8981df17328270a7bcca009e26646 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.115574758Z" level=warning msg="cleaning up after shim disconnected" id=d62e427408193d81dfeef6197dfd783599f8981df17328270a7bcca009e26646 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.115587586Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1197]: time="2024-09-20T21:01:25.617198197Z" level=info msg="ignoring event" container=ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.623804039Z" level=info msg="shim disconnected" id=ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.623911431Z" level=warning msg="cleaning up after shim disconnected" id=ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.623922373Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1197]: time="2024-09-20T21:01:25.646930036Z" level=info msg="ignoring event" container=d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.647836301Z" level=info msg="shim disconnected" id=d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.648559343Z" level=warning msg="cleaning up after shim disconnected" id=d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.648685621Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1197]: time="2024-09-20T21:01:25.797849321Z" level=info msg="ignoring event" container=68bd3aa284e4e6620d6f64e837b6674dde49db745c73da2ef909facb3f883e58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.798507472Z" level=info msg="shim disconnected" id=68bd3aa284e4e6620d6f64e837b6674dde49db745c73da2ef909facb3f883e58 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.798564731Z" level=warning msg="cleaning up after shim disconnected" id=68bd3aa284e4e6620d6f64e837b6674dde49db745c73da2ef909facb3f883e58 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.798574205Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.916493681Z" level=info msg="shim disconnected" id=2abd21892a3499fce91295bccef3bd6a218a9c06f1e0889dec9bcb6d8ac28ea7 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1197]: time="2024-09-20T21:01:25.917459924Z" level=info msg="ignoring event" container=2abd21892a3499fce91295bccef3bd6a218a9c06f1e0889dec9bcb6d8ac28ea7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.917660244Z" level=warning msg="cleaning up after shim disconnected" id=2abd21892a3499fce91295bccef3bd6a218a9c06f1e0889dec9bcb6d8ac28ea7 namespace=moby
	Sep 20 21:01:25 addons-022099 dockerd[1203]: time="2024-09-20T21:01:25.917698209Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID              POD
	07b4068bcac4a       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  39 seconds ago      Running             hello-world-app              0                   33bd11d5f3ff7       hello-world-app-55bf9c44b4-l7nzm
	7287ede4cef34       a416a98b71e22                                                                                                                44 seconds ago      Exited              helper-pod                   0                   81f5dbbd1b3fd       helper-pod-delete-pvc-cfafe1ad-5fed-43fd-aea8-6c7af2b579b8
	45fb954ad64f3       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                48 seconds ago      Running             nginx                        0                   f9675175d8565       nginx
	1496489ae661d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                     0                   269142e068de6       gcp-auth-89d5ffd79-shv56
	92b7296bba6bf       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                        1                   6a0e67e150f70       ingress-nginx-admission-patch-nfz2n
	45faab347a046       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                       0                   d81c0ed228ea0       ingress-nginx-admission-create-8v257
	44c4ee49bd750       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      11 minutes ago      Running             volume-snapshot-controller   0                   aa2f8187b1f42       snapshot-controller-56fcc65765-ckbnc
	371a58bb85915       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      11 minutes ago      Running             volume-snapshot-controller   0                   394d2f35376e6       snapshot-controller-56fcc65765-82wjf
	9960446ec2059       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner          0                   5bf46a2e0920d       storage-provisioner
	fb87ee183c551       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                      0                   e36a80ef3603c       coredns-7c65d6cfc9-5sk8c
	0009dfd9a9320       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                   0                   1fbf5d3cbebb4       kube-proxy-tcxgk
	1503c98291041       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                         0                   301a51fe34b8a       etcd-addons-022099
	c7b8a8f544d96       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver               0                   9d0de59d1a28f       kube-apiserver-addons-022099
	c2ef627071d59       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler               0                   6edf6ab6a0a25       kube-scheduler-addons-022099
	13255991fbac3       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager      0                   bed7702223503       kube-controller-manager-addons-022099
	
	
	==> coredns [fb87ee183c55] <==
	[INFO] 10.244.0.6:34506 - 54578 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103559s
	[INFO] 10.244.0.6:36410 - 23129 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123242s
	[INFO] 10.244.0.6:36410 - 44886 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000196272s
	[INFO] 10.244.0.6:38960 - 26572 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087632s
	[INFO] 10.244.0.6:38960 - 59338 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000371862s
	[INFO] 10.244.0.6:40728 - 49888 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067643s
	[INFO] 10.244.0.6:40728 - 40418 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000237585s
	[INFO] 10.244.0.6:37345 - 10661 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139976s
	[INFO] 10.244.0.6:37345 - 62368 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039211s
	[INFO] 10.244.0.6:34093 - 60765 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000176614s
	[INFO] 10.244.0.6:55909 - 51548 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077278s
	[INFO] 10.244.0.6:55909 - 14426 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165694s
	[INFO] 10.244.0.6:34093 - 53087 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001240787s
	[INFO] 10.244.0.6:45611 - 62112 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001556s
	[INFO] 10.244.0.6:45611 - 51362 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136231s
	[INFO] 10.244.0.6:53903 - 56530 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000086333s
	[INFO] 10.244.0.6:53903 - 42960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167243s
	[INFO] 10.244.0.25:55985 - 56062 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000521233s
	[INFO] 10.244.0.25:59568 - 31872 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000209181s
	[INFO] 10.244.0.25:42057 - 54431 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012196s
	[INFO] 10.244.0.25:52260 - 19706 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012168s
	[INFO] 10.244.0.25:59030 - 28776 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082732s
	[INFO] 10.244.0.25:59810 - 51913 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089984s
	[INFO] 10.244.0.25:56348 - 15816 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000857777s
	[INFO] 10.244.0.25:39978 - 36341 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001339566s
	
	
	==> describe nodes <==
	Name:               addons-022099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-022099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-022099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_48_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-022099
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:48:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-022099
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:01:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:01:17 +0000   Fri, 20 Sep 2024 20:48:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:01:17 +0000   Fri, 20 Sep 2024 20:48:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:01:17 +0000   Fri, 20 Sep 2024 20:48:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:01:17 +0000   Fri, 20 Sep 2024 20:48:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    addons-022099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ade5546a574d4e65beddb32584076969
	  System UUID:                ade5546a-574d-4e65-bedd-b32584076969
	  Boot ID:                    c0819bc4-4285-4471-99ea-75e94d94bde1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-l7nzm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  gcp-auth                    gcp-auth-89d5ffd79-shv56                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-5sk8c                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-022099                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-022099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-022099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tcxgk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-022099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-82wjf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-ckbnc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-022099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-022099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-022099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-022099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-022099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-022099 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-022099 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-022099 event: Registered Node addons-022099 in Controller
	
	
	==> dmesg <==
	[  +6.005676] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.389217] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.002146] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.351167] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.030582] kauditd_printk_skb: 66 callbacks suppressed
	[Sep20 20:50] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.302756] kauditd_printk_skb: 16 callbacks suppressed
	[ +33.038293] kauditd_printk_skb: 32 callbacks suppressed
	[Sep20 20:51] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.969225] kauditd_printk_skb: 40 callbacks suppressed
	[  +9.171672] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.943242] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.508634] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 20:52] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.323527] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 20:56] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 21:00] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.220950] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.566996] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.154590] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.122384] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.708110] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.027951] kauditd_printk_skb: 45 callbacks suppressed
	[Sep20 21:01] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.392108] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [1503c9829104] <==
	{"level":"warn","ts":"2024-09-20T20:51:53.294514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.270358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-20T20:51:53.294568Z","caller":"traceutil/trace.go:171","msg":"trace[691070983] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1573; }","duration":"136.385801ms","start":"2024-09-20T20:51:53.158173Z","end":"2024-09-20T20:51:53.294559Z","steps":["trace[691070983] 'agreement among raft nodes before linearized reading'  (duration: 135.806996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:51:56.392410Z","caller":"traceutil/trace.go:171","msg":"trace[112075901] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"374.044479ms","start":"2024-09-20T20:51:56.018346Z","end":"2024-09-20T20:51:56.392390Z","steps":["trace[112075901] 'process raft request'  (duration: 373.575987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:51:56.393229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:51:56.018332Z","time spent":"374.142451ms","remote":"127.0.0.1:57886","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":796,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-bw9ps.17f70edeb749ac80\" mod_revision:1377 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-bw9ps.17f70edeb749ac80\" value_size:712 lease:5115968304260686328 >> failure:<request_range:<key:\"/registry/events/kube-system/registry-proxy-bw9ps.17f70edeb749ac80\" > >"}
	{"level":"info","ts":"2024-09-20T20:51:56.393493Z","caller":"traceutil/trace.go:171","msg":"trace[986137453] linearizableReadLoop","detail":"{readStateIndex:1635; appliedIndex:1634; }","duration":"368.83276ms","start":"2024-09-20T20:51:56.023348Z","end":"2024-09-20T20:51:56.392181Z","steps":["trace[986137453] 'read index received'  (duration: 368.773614ms)","trace[986137453] 'applied index is now lower than readState.Index'  (duration: 58.476µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T20:51:56.393620Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.264554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:51:56.393660Z","caller":"traceutil/trace.go:171","msg":"trace[186636880] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1577; }","duration":"370.309474ms","start":"2024-09-20T20:51:56.023344Z","end":"2024-09-20T20:51:56.393654Z","steps":["trace[186636880] 'agreement among raft nodes before linearized reading'  (duration: 370.243783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:51:56.393680Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:51:56.023318Z","time spent":"370.357484ms","remote":"127.0.0.1:57980","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-20T20:51:56.399021Z","caller":"traceutil/trace.go:171","msg":"trace[860730215] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"218.391836ms","start":"2024-09-20T20:51:56.180616Z","end":"2024-09-20T20:51:56.399008Z","steps":["trace[860730215] 'process raft request'  (duration: 217.403049ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:51:56.400413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.721012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:51:56.400466Z","caller":"traceutil/trace.go:171","msg":"trace[2045309666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1578; }","duration":"168.776181ms","start":"2024-09-20T20:51:56.231681Z","end":"2024-09-20T20:51:56.400457Z","steps":["trace[2045309666] 'agreement among raft nodes before linearized reading'  (duration: 168.702704ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:37.874799Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1887}
	{"level":"info","ts":"2024-09-20T20:58:37.979800Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1887,"took":"104.082588ms","hash":2097448287,"current-db-size-bytes":9084928,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5013504,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-20T20:58:37.979871Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2097448287,"revision":1887,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T21:00:21.231424Z","caller":"traceutil/trace.go:171","msg":"trace[1864982182] linearizableReadLoop","detail":"{readStateIndex:2661; appliedIndex:2660; }","duration":"118.575692ms","start":"2024-09-20T21:00:21.112809Z","end":"2024-09-20T21:00:21.231384Z","steps":["trace[1864982182] 'read index received'  (duration: 118.398768ms)","trace[1864982182] 'applied index is now lower than readState.Index'  (duration: 176.301µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T21:00:21.231656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.76734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T21:00:21.231686Z","caller":"traceutil/trace.go:171","msg":"trace[1035473822] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:2492; }","duration":"118.875623ms","start":"2024-09-20T21:00:21.112805Z","end":"2024-09-20T21:00:21.231681Z","steps":["trace[1035473822] 'agreement among raft nodes before linearized reading'  (duration: 118.737309ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T21:00:21.470230Z","caller":"traceutil/trace.go:171","msg":"trace[701012565] linearizableReadLoop","detail":"{readStateIndex:2662; appliedIndex:2661; }","duration":"236.609007ms","start":"2024-09-20T21:00:21.233607Z","end":"2024-09-20T21:00:21.470216Z","steps":["trace[701012565] 'read index received'  (duration: 232.361749ms)","trace[701012565] 'applied index is now lower than readState.Index'  (duration: 4.246204ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T21:00:21.470324Z","caller":"traceutil/trace.go:171","msg":"trace[1204245925] transaction","detail":"{read_only:false; response_revision:2493; number_of_response:1; }","duration":"237.777379ms","start":"2024-09-20T21:00:21.232533Z","end":"2024-09-20T21:00:21.470311Z","steps":["trace[1204245925] 'process raft request'  (duration: 233.554887ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T21:00:21.470539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.920567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T21:00:21.470564Z","caller":"traceutil/trace.go:171","msg":"trace[149700011] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2493; }","duration":"236.953772ms","start":"2024-09-20T21:00:21.233603Z","end":"2024-09-20T21:00:21.470557Z","steps":["trace[149700011] 'agreement among raft nodes before linearized reading'  (duration: 236.852905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T21:00:21.470643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.467815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-09-20T21:00:21.470697Z","caller":"traceutil/trace.go:171","msg":"trace[26661698] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:2493; }","duration":"149.52555ms","start":"2024-09-20T21:00:21.321162Z","end":"2024-09-20T21:00:21.470688Z","steps":["trace[26661698] 'agreement among raft nodes before linearized reading'  (duration: 149.406781ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T21:00:21.470843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.101568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2024-09-20T21:00:21.470860Z","caller":"traceutil/trace.go:171","msg":"trace[528844669] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:2493; }","duration":"124.118591ms","start":"2024-09-20T21:00:21.346735Z","end":"2024-09-20T21:00:21.470853Z","steps":["trace[528844669] 'agreement among raft nodes before linearized reading'  (duration: 124.046346ms)"],"step_count":1}
	
	
	==> gcp-auth [1496489ae661] <==
	2024/09/20 20:52:12 Ready to write response ...
	2024/09/20 20:52:12 Ready to marshal response ...
	2024/09/20 20:52:12 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:24 Ready to marshal response ...
	2024/09/20 21:00:24 Ready to write response ...
	2024/09/20 21:00:32 Ready to marshal response ...
	2024/09/20 21:00:32 Ready to write response ...
	2024/09/20 21:00:32 Ready to marshal response ...
	2024/09/20 21:00:32 Ready to write response ...
	2024/09/20 21:00:34 Ready to marshal response ...
	2024/09/20 21:00:34 Ready to write response ...
	2024/09/20 21:00:41 Ready to marshal response ...
	2024/09/20 21:00:41 Ready to write response ...
	2024/09/20 21:00:45 Ready to marshal response ...
	2024/09/20 21:00:45 Ready to write response ...
	2024/09/20 21:00:46 Ready to marshal response ...
	2024/09/20 21:00:46 Ready to write response ...
	2024/09/20 21:01:14 Ready to marshal response ...
	2024/09/20 21:01:14 Ready to write response ...
	
	
	==> kernel <==
	 21:01:26 up 13 min,  0 users,  load average: 0.84, 0.61, 0.56
	Linux addons-022099 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7b8a8f544d9] <==
	I0920 20:52:03.289271       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 20:52:03.475737       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 20:52:03.936413       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 20:52:03.949553       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 20:52:04.039087       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 20:52:04.041257       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 20:52:04.476342       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 20:52:04.488231       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 20:52:04.572704       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 20:52:04.951363       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 20:52:05.040022       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 20:52:05.428083       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 21:00:15.547703       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.226.199"}
	I0920 21:00:21.638450       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 21:00:22.823582       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 21:00:34.684909       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 21:00:34.850955       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.48.183"}
	I0920 21:00:39.055848       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0920 21:00:42.966915       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 21:00:42.983434       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 21:00:42.992669       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 21:00:45.332912       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.223.189"}
	I0920 21:00:54.483529       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0920 21:00:57.993846       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 21:01:22.960789       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	
	
	==> kube-controller-manager [13255991fbac] <==
	I0920 21:00:47.078478       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 21:00:47.369299       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.03449ms"
	I0920 21:00:47.369757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.794µs"
	W0920 21:00:54.273751       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:54.274115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:00:57.066738       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0920 21:01:06.009940       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:06.010152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:06.345598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:06.345661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:11.236324       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:11.236731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:13.070024       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:13.070120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:13.219430       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:13.219541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:17.130335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-022099"
	W0920 21:01:19.260753       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:19.260815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:20.012707       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:20.012747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:23.022611       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 21:01:23.160424       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 21:01:23.539213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-022099"
	I0920 21:01:25.511972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.105µs"
	
	
	==> kube-proxy [0009dfd9a932] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 20:48:48.457658       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 20:48:48.495272       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0920 20:48:48.495419       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:48:48.674611       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 20:48:48.674691       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 20:48:48.674714       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:48:48.682596       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:48:48.682867       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:48:48.682879       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:48:48.688023       1 config.go:199] "Starting service config controller"
	I0920 20:48:48.688109       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:48:48.688142       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:48:48.688146       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:48:48.736422       1 config.go:328] "Starting node config controller"
	I0920 20:48:48.736457       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:48:48.793627       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 20:48:48.793696       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:48:48.849282       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c2ef627071d5] <==
	W0920 20:48:39.330873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:39.331097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:39.328394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:39.331325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.138928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 20:48:40.138963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.142566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:40.142609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.153384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:48:40.153469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.178944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:40.179075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.392358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:40.392846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.416867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:40.417156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.428702       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:48:40.429094       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 20:48:40.456755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:48:40.457011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.505797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 20:48:40.506072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:40.602552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 20:48:40.602601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 20:48:43.518736       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 21:01:24 addons-022099 kubelet[1980]: I0920 21:01:24.367518    1980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e92ddbb7a9a16028e2daeffbde7e2979bb345b3a51229a29c25a8ddc71d9791c"} err="failed to get container status \"e92ddbb7a9a16028e2daeffbde7e2979bb345b3a51229a29c25a8ddc71d9791c\": rpc error: code = Unknown desc = Error response from daemon: No such container: e92ddbb7a9a16028e2daeffbde7e2979bb345b3a51229a29c25a8ddc71d9791c"
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.248417    1980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-gcp-creds\") pod \"b2ba0e93-75fa-4bf8-8548-03fd2f58c78b\" (UID: \"b2ba0e93-75fa-4bf8-8548-03fd2f58c78b\") "
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.248469    1980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgb79\" (UniqueName: \"kubernetes.io/projected/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-kube-api-access-fgb79\") pod \"b2ba0e93-75fa-4bf8-8548-03fd2f58c78b\" (UID: \"b2ba0e93-75fa-4bf8-8548-03fd2f58c78b\") "
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.249103    1980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b2ba0e93-75fa-4bf8-8548-03fd2f58c78b" (UID: "b2ba0e93-75fa-4bf8-8548-03fd2f58c78b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.256541    1980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-kube-api-access-fgb79" (OuterVolumeSpecName: "kube-api-access-fgb79") pod "b2ba0e93-75fa-4bf8-8548-03fd2f58c78b" (UID: "b2ba0e93-75fa-4bf8-8548-03fd2f58c78b"). InnerVolumeSpecName "kube-api-access-fgb79". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.349336    1980 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-gcp-creds\") on node \"addons-022099\" DevicePath \"\""
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.349387    1980 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fgb79\" (UniqueName: \"kubernetes.io/projected/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b-kube-api-access-fgb79\") on node \"addons-022099\" DevicePath \"\""
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.955158    1980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98pkp\" (UniqueName: \"kubernetes.io/projected/ab4d0c54-9272-4637-a6f3-5c42d97b42cf-kube-api-access-98pkp\") pod \"ab4d0c54-9272-4637-a6f3-5c42d97b42cf\" (UID: \"ab4d0c54-9272-4637-a6f3-5c42d97b42cf\") "
	Sep 20 21:01:25 addons-022099 kubelet[1980]: I0920 21:01:25.957757    1980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4d0c54-9272-4637-a6f3-5c42d97b42cf-kube-api-access-98pkp" (OuterVolumeSpecName: "kube-api-access-98pkp") pod "ab4d0c54-9272-4637-a6f3-5c42d97b42cf" (UID: "ab4d0c54-9272-4637-a6f3-5c42d97b42cf"). InnerVolumeSpecName "kube-api-access-98pkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.023712    1980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2523b3-b44d-4299-9cac-b96510cec65d" path="/var/lib/kubelet/pods/0b2523b3-b44d-4299-9cac-b96510cec65d/volumes"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.024654    1980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724d7efd-a0a5-410f-9493-b439157b2a5b" path="/var/lib/kubelet/pods/724d7efd-a0a5-410f-9493-b439157b2a5b/volumes"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.026667    1980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ba0e93-75fa-4bf8-8548-03fd2f58c78b" path="/var/lib/kubelet/pods/b2ba0e93-75fa-4bf8-8548-03fd2f58c78b/volumes"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.027848    1980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0169cbf-b28a-483b-8e80-07f7a84484e1" path="/var/lib/kubelet/pods/f0169cbf-b28a-483b-8e80-07f7a84484e1/volumes"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.056458    1980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznm8\" (UniqueName: \"kubernetes.io/projected/fa5aee16-1db8-452c-a07d-1062044723ed-kube-api-access-hznm8\") pod \"fa5aee16-1db8-452c-a07d-1062044723ed\" (UID: \"fa5aee16-1db8-452c-a07d-1062044723ed\") "
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.056760    1980 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-98pkp\" (UniqueName: \"kubernetes.io/projected/ab4d0c54-9272-4637-a6f3-5c42d97b42cf-kube-api-access-98pkp\") on node \"addons-022099\" DevicePath \"\""
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.058869    1980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5aee16-1db8-452c-a07d-1062044723ed-kube-api-access-hznm8" (OuterVolumeSpecName: "kube-api-access-hznm8") pod "fa5aee16-1db8-452c-a07d-1062044723ed" (UID: "fa5aee16-1db8-452c-a07d-1062044723ed"). InnerVolumeSpecName "kube-api-access-hznm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.157614    1980 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hznm8\" (UniqueName: \"kubernetes.io/projected/fa5aee16-1db8-452c-a07d-1062044723ed-kube-api-access-hznm8\") on node \"addons-022099\" DevicePath \"\""
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.276987    1980 scope.go:117] "RemoveContainer" containerID="ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.315746    1980 scope.go:117] "RemoveContainer" containerID="ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: E0920 21:01:26.317633    1980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5" containerID="ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.317678    1980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5"} err="failed to get container status \"ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5\": rpc error: code = Unknown desc = Error response from daemon: No such container: ec8521f283a2db5061676253794240adacf6c95978b8de0146edb25d80393eb5"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.317699    1980 scope.go:117] "RemoveContainer" containerID="d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.342586    1980 scope.go:117] "RemoveContainer" containerID="d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: E0920 21:01:26.343697    1980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237" containerID="d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237"
	Sep 20 21:01:26 addons-022099 kubelet[1980]: I0920 21:01:26.343726    1980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237"} err="failed to get container status \"d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237\": rpc error: code = Unknown desc = Error response from daemon: No such container: d94b8f885a1687b75f45532b313df69fe51bec9c24d7000c1912c58b3d576237"
	
	
	==> storage-provisioner [9960446ec205] <==
	I0920 20:48:57.067017       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:48:57.091769       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:48:57.091844       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:48:57.125712       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:48:57.125850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-022099_9eef2a13-2e47-4df5-90f7-2df6ec14f021!
	I0920 20:48:57.150251       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19013a08-a8b3-4a64-98bf-4657d70aaa8d", APIVersion:"v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-022099_9eef2a13-2e47-4df5-90f7-2df6ec14f021 became leader
	I0920 20:48:57.232391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-022099_9eef2a13-2e47-4df5-90f7-2df6ec14f021!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-022099 -n addons-022099
helpers_test.go:261: (dbg) Run:  kubectl --context addons-022099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-022099 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-022099 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022099/192.168.39.113
	Start Time:       Fri, 20 Sep 2024 20:52:12 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh949 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dh949:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to addons-022099
	  Normal   Pulling    7m45s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.57s)

                                                
                                    

Test pass (308/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 96.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 220.03
29 TestAddons/serial/Volcano 39.8
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 19.68
35 TestAddons/parallel/InspektorGadget 12.02
36 TestAddons/parallel/MetricsServer 7.07
38 TestAddons/parallel/CSI 50.86
39 TestAddons/parallel/Headlamp 19.57
40 TestAddons/parallel/CloudSpanner 5.52
41 TestAddons/parallel/LocalPath 53.21
42 TestAddons/parallel/NvidiaDevicePlugin 5.39
43 TestAddons/parallel/Yakd 11.66
44 TestAddons/StoppedEnableDisable 8.55
45 TestCertOptions 107.55
46 TestCertExpiration 316.18
47 TestDockerFlags 53.28
48 TestForceSystemdFlag 54.63
49 TestForceSystemdEnv 107.52
51 TestKVMDriverInstallOrUpdate 3.22
55 TestErrorSpam/setup 52.01
56 TestErrorSpam/start 0.32
57 TestErrorSpam/status 0.71
58 TestErrorSpam/pause 1.19
59 TestErrorSpam/unpause 1.36
60 TestErrorSpam/stop 15.73
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 93.96
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 42.08
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.53
72 TestFunctional/serial/CacheCmd/cache/add_local 0.94
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.13
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 41.57
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.94
83 TestFunctional/serial/LogsFileCmd 1.01
84 TestFunctional/serial/InvalidService 4.48
86 TestFunctional/parallel/ConfigCmd 0.31
87 TestFunctional/parallel/DashboardCmd 26.67
88 TestFunctional/parallel/DryRun 0.27
89 TestFunctional/parallel/InternationalLanguage 0.13
90 TestFunctional/parallel/StatusCmd 0.89
94 TestFunctional/parallel/ServiceCmdConnect 12.52
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 46.92
98 TestFunctional/parallel/SSHCmd 0.43
99 TestFunctional/parallel/CpCmd 1.15
100 TestFunctional/parallel/MySQL 30.82
101 TestFunctional/parallel/FileSync 0.21
102 TestFunctional/parallel/CertSync 1.15
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
110 TestFunctional/parallel/License 0.2
111 TestFunctional/parallel/Version/short 0.05
112 TestFunctional/parallel/Version/components 0.52
113 TestFunctional/parallel/DockerEnv/bash 0.84
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.7
119 TestFunctional/parallel/ImageCommands/Setup 1.03
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
130 TestFunctional/parallel/ProfileCmd/profile_list 0.37
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.76
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.06
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
143 TestFunctional/parallel/MountCmd/any-port 11.29
144 TestFunctional/parallel/ServiceCmd/List 0.25
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
147 TestFunctional/parallel/ServiceCmd/Format 0.44
148 TestFunctional/parallel/ServiceCmd/URL 0.32
149 TestFunctional/parallel/MountCmd/specific-port 1.69
150 TestFunctional/parallel/MountCmd/VerifyCleanup 0.72
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
154 TestGvisorAddon 214.04
157 TestMultiControlPlane/serial/StartCluster 213.7
158 TestMultiControlPlane/serial/DeployApp 4.67
159 TestMultiControlPlane/serial/PingHostFromPods 1.19
160 TestMultiControlPlane/serial/AddWorkerNode 63.86
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
163 TestMultiControlPlane/serial/CopyFile 12.6
164 TestMultiControlPlane/serial/StopSecondaryNode 13.92
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
166 TestMultiControlPlane/serial/RestartSecondaryNode 43.3
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 295.6
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.28
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
171 TestMultiControlPlane/serial/StopCluster 39.07
172 TestMultiControlPlane/serial/RestartCluster 125.82
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
174 TestMultiControlPlane/serial/AddSecondaryNode 85.25
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
178 TestImageBuild/serial/Setup 48.9
179 TestImageBuild/serial/NormalBuild 1.47
180 TestImageBuild/serial/BuildWithBuildArg 0.94
181 TestImageBuild/serial/BuildWithDockerIgnore 0.63
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.87
186 TestJSONOutput/start/Command 207.65
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.58
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.55
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 7.6
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.18
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 100.47
218 TestMountStart/serial/StartWithMountFirst 28.16
219 TestMountStart/serial/VerifyMountFirst 0.35
220 TestMountStart/serial/StartWithMountSecond 32.78
221 TestMountStart/serial/VerifyMountSecond 0.37
222 TestMountStart/serial/DeleteFirst 0.87
223 TestMountStart/serial/VerifyMountPostDelete 0.37
224 TestMountStart/serial/Stop 2.28
225 TestMountStart/serial/RestartStopped 26.2
226 TestMountStart/serial/VerifyMountPostStop 0.36
229 TestMultiNode/serial/FreshStart2Nodes 128.76
230 TestMultiNode/serial/DeployApp2Nodes 4.75
231 TestMultiNode/serial/PingHostFrom2Pods 0.79
232 TestMultiNode/serial/AddNode 58.72
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.56
235 TestMultiNode/serial/CopyFile 6.85
236 TestMultiNode/serial/StopNode 3.41
237 TestMultiNode/serial/StartAfterStop 42.12
238 TestMultiNode/serial/RestartKeepsNodes 290.49
239 TestMultiNode/serial/DeleteNode 2.21
240 TestMultiNode/serial/StopMultiNode 25.05
241 TestMultiNode/serial/RestartMultiNode 117.69
242 TestMultiNode/serial/ValidateNameConflict 52.54
247 TestPreload 151.09
249 TestScheduledStopUnix 122.17
250 TestSkaffold 124.37
253 TestRunningBinaryUpgrade 187.61
255 TestKubernetesUpgrade 209.9
260 TestPause/serial/Start 91.5
270 TestPause/serial/SecondStartNoReconfiguration 60.37
271 TestStoppedBinaryUpgrade/Setup 0.43
272 TestStoppedBinaryUpgrade/Upgrade 161.46
273 TestPause/serial/Pause 0.58
274 TestPause/serial/VerifyStatus 0.24
275 TestPause/serial/Unpause 0.56
276 TestPause/serial/PauseAgain 0.75
277 TestPause/serial/DeletePaused 0.84
278 TestPause/serial/VerifyDeletedResources 0.64
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
288 TestNoKubernetes/serial/StartNoK8sWithVersion 0.35
289 TestNoKubernetes/serial/StartWithK8s 67.68
290 TestNetworkPlugins/group/auto/Start 110.72
291 TestNoKubernetes/serial/StartWithStopK8s 12.83
292 TestNetworkPlugins/group/kindnet/Start 76.61
293 TestNoKubernetes/serial/Start 52.47
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
295 TestNoKubernetes/serial/ProfileList 29.31
296 TestNetworkPlugins/group/auto/KubeletFlags 0.21
297 TestNetworkPlugins/group/auto/NetCatPod 11.23
298 TestNetworkPlugins/group/auto/DNS 0.16
299 TestNetworkPlugins/group/auto/Localhost 0.12
300 TestNetworkPlugins/group/auto/HairPin 0.14
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.21
304 TestNetworkPlugins/group/calico/Start 86.88
305 TestNoKubernetes/serial/Stop 2.3
306 TestNoKubernetes/serial/StartNoArgs 51.12
307 TestNetworkPlugins/group/kindnet/DNS 0.26
308 TestNetworkPlugins/group/kindnet/Localhost 0.15
309 TestNetworkPlugins/group/kindnet/HairPin 0.17
310 TestNetworkPlugins/group/custom-flannel/Start 98.33
311 TestNetworkPlugins/group/false/Start 141.51
312 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
313 TestNetworkPlugins/group/enable-default-cni/Start 142.33
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.21
316 TestNetworkPlugins/group/calico/NetCatPod 11.24
317 TestNetworkPlugins/group/calico/DNS 0.21
318 TestNetworkPlugins/group/calico/Localhost 0.16
319 TestNetworkPlugins/group/calico/HairPin 0.19
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
322 TestNetworkPlugins/group/flannel/Start 83.34
323 TestNetworkPlugins/group/custom-flannel/DNS 0.21
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
326 TestNetworkPlugins/group/bridge/Start 82.36
327 TestNetworkPlugins/group/false/KubeletFlags 0.28
328 TestNetworkPlugins/group/false/NetCatPod 13.29
329 TestNetworkPlugins/group/false/DNS 0.18
330 TestNetworkPlugins/group/false/Localhost 0.16
331 TestNetworkPlugins/group/false/HairPin 0.14
332 TestNetworkPlugins/group/kubenet/Start 97.77
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.56
335 TestNetworkPlugins/group/flannel/ControllerPod 6.01
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
339 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
340 TestNetworkPlugins/group/flannel/NetCatPod 12.24
341 TestNetworkPlugins/group/flannel/DNS 0.2
342 TestNetworkPlugins/group/flannel/Localhost 0.18
343 TestNetworkPlugins/group/flannel/HairPin 0.2
345 TestStartStop/group/old-k8s-version/serial/FirstStart 176.15
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
347 TestNetworkPlugins/group/bridge/NetCatPod 14.31
349 TestStartStop/group/no-preload/serial/FirstStart 121.35
350 TestNetworkPlugins/group/bridge/DNS 0.19
351 TestNetworkPlugins/group/bridge/Localhost 0.15
352 TestNetworkPlugins/group/bridge/HairPin 0.14
354 TestStartStop/group/embed-certs/serial/FirstStart 91.45
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
356 TestNetworkPlugins/group/kubenet/NetCatPod 11.33
357 TestNetworkPlugins/group/kubenet/DNS 0.17
358 TestNetworkPlugins/group/kubenet/Localhost 0.19
359 TestNetworkPlugins/group/kubenet/HairPin 0.17
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.26
362 TestStartStop/group/embed-certs/serial/DeployApp 9.35
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
364 TestStartStop/group/embed-certs/serial/Stop 13.33
365 TestStartStop/group/no-preload/serial/DeployApp 7.34
366 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
367 TestStartStop/group/no-preload/serial/Stop 13.34
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
369 TestStartStop/group/embed-certs/serial/SecondStart 305.83
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/no-preload/serial/SecondStart 319.51
372 TestStartStop/group/old-k8s-version/serial/DeployApp 8.53
373 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
374 TestStartStop/group/old-k8s-version/serial/Stop 13.35
375 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/old-k8s-version/serial/SecondStart 407.66
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 316.72
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
385 TestStartStop/group/embed-certs/serial/Pause 2.52
387 TestStartStop/group/newest-cni/serial/FirstStart 63.67
388 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
389 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
391 TestStartStop/group/no-preload/serial/Pause 2.58
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
395 TestStartStop/group/newest-cni/serial/Stop 13.36
396 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
397 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
398 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
399 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
400 TestStartStop/group/newest-cni/serial/SecondStart 37.87
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
404 TestStartStop/group/newest-cni/serial/Pause 2.52
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
408 TestStartStop/group/old-k8s-version/serial/Pause 2.27
TestDownloadOnly/v1.20.0/json-events (8.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-785564 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-785564 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.099556864s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 20:47:47.731666   16802 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 20:47:47.731757   16802 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-785564
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-785564: exit status 85 (55.796418ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-785564 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |          |
	|         | -p download-only-785564        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:39
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:39.668275   16814 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:39.668384   16814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:39.668393   16814 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:39.668398   16814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:39.668580   16814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	W0920 20:47:39.668692   16814 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-9629/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-9629/.minikube/config/config.json: no such file or directory
	I0920 20:47:39.669236   16814 out.go:352] Setting JSON to true
	I0920 20:47:39.670175   16814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":1809,"bootTime":1726863451,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:39.670261   16814 start.go:139] virtualization: kvm guest
	I0920 20:47:39.672640   16814 out.go:97] [download-only-785564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 20:47:39.672744   16814 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:47:39.672800   16814 notify.go:220] Checking for updates...
	I0920 20:47:39.674297   16814 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:39.675585   16814 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:39.676808   16814 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 20:47:39.678120   16814 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 20:47:39.679363   16814 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 20:47:39.681669   16814 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 20:47:39.681891   16814 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:39.777824   16814 out.go:97] Using the kvm2 driver based on user configuration
	I0920 20:47:39.777869   16814 start.go:297] selected driver: kvm2
	I0920 20:47:39.777880   16814 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:39.778187   16814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:39.778305   16814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:39.792440   16814 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:39.792496   16814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:39.793008   16814 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 20:47:39.793178   16814 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 20:47:39.793209   16814 cni.go:84] Creating CNI manager for ""
	I0920 20:47:39.793274   16814 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 20:47:39.793342   16814 start.go:340] cluster config:
	{Name:download-only-785564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-785564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:39.793524   16814 iso.go:125] acquiring lock: {Name:mk0664e876c81c8da8805d4583236b6d02c9f72b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:39.795354   16814 out.go:97] Downloading VM boot image ...
	I0920 20:47:39.795389   16814 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19672-9629/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:43.825271   16814 out.go:97] Starting "download-only-785564" primary control-plane node in "download-only-785564" cluster
	I0920 20:47:43.825317   16814 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 20:47:43.853851   16814 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 20:47:43.853887   16814 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:43.854052   16814 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 20:47:43.855658   16814 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 20:47:43.855672   16814 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 20:47:43.885937   16814 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-785564 host does not exist
	  To start a cluster, run: "minikube start -p download-only-785564"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-785564
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (3.42s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-671571 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-671571 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (3.422130365s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.42s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 20:47:51.458135   16802 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 20:47:51.458174   16802 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-671571
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-671571: exit status 85 (54.325864ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-785564 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-785564        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-785564        | download-only-785564 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only        | download-only-671571 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-671571        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:48
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:48.071748   17033 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:48.071853   17033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:48.071862   17033 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:48.071866   17033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:48.072054   17033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 20:47:48.072625   17033 out.go:352] Setting JSON to true
	I0920 20:47:48.073414   17033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":1817,"bootTime":1726863451,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:48.073506   17033 start.go:139] virtualization: kvm guest
	I0920 20:47:48.075560   17033 out.go:97] [download-only-671571] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:48.075692   17033 notify.go:220] Checking for updates...
	I0920 20:47:48.077055   17033 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:48.078200   17033 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:48.079289   17033 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 20:47:48.080290   17033 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 20:47:48.081320   17033 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-671571 host does not exist
	  To start a cluster, run: "minikube start -p download-only-671571"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-671571
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I0920 20:47:51.991299   16802 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-825813 --alsologtostderr --binary-mirror http://127.0.0.1:43433 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-825813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-825813
--- PASS: TestBinaryMirror (0.58s)

TestOffline (96.7s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-687839 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-687839 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m35.708185564s)
helpers_test.go:175: Cleaning up "offline-docker-687839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-687839
--- PASS: TestOffline (96.70s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-022099
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-022099: exit status 85 (46.580681ms)

-- stdout --
	* Profile "addons-022099" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-022099"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-022099
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-022099: exit status 85 (47.022961ms)

-- stdout --
	* Profile "addons-022099" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-022099"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (220.03s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-022099 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-022099 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m40.031891817s)
--- PASS: TestAddons/Setup (220.03s)

TestAddons/serial/Volcano (39.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 17.958819ms
addons_test.go:835: volcano-scheduler stabilized in 17.996663ms
addons_test.go:851: volcano-controller stabilized in 18.060956ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5nm74" [36d9fc35-7419-436a-9259-80cb4e04e357] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004633822s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-rtr4r" [39f52759-3623-4282-bf5f-f06272c8a4db] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00532169s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-v8wpf" [5cdbea1a-863c-4df3-ac42-789e5062a4be] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005982937s
addons_test.go:870: (dbg) Run:  kubectl --context addons-022099 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-022099 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-022099 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [80554235-843b-4857-9ee2-0e1d5955b814] Pending
helpers_test.go:344: "test-job-nginx-0" [80554235-843b-4857-9ee2-0e1d5955b814] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [80554235-843b-4857-9ee2-0e1d5955b814] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004536275s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable volcano --alsologtostderr -v=1: (10.396415118s)
--- PASS: TestAddons/serial/Volcano (39.80s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-022099 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-022099 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (19.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-022099 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-022099 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-022099 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d0ce4972-5cdb-4d84-b8f4-91b6d8c47d42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d0ce4972-5cdb-4d84-b8f4-91b6d8c47d42] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003883962s
I0920 21:00:44.908493   16802 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-022099 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.113
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable ingress --alsologtostderr -v=1: (7.734022936s)
--- PASS: TestAddons/parallel/Ingress (19.68s)

TestAddons/parallel/InspektorGadget (12.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lgjb8" [cf963370-e8ad-44a3-9b58-b63777212ebc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003941495s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-022099
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-022099: (6.010432137s)
--- PASS: TestAddons/parallel/InspektorGadget (12.02s)

TestAddons/parallel/MetricsServer (7.07s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.031216ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-f5f6r" [df9fba66-473e-4b23-86f7-65a886961fee] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002711162s
addons_test.go:413: (dbg) Run:  kubectl --context addons-022099 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.07s)

TestAddons/parallel/CSI (50.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 21:00:39.145209   16802 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 21:00:39.157534   16802 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 21:00:39.157555   16802 kapi.go:107] duration metric: took 12.36771ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 12.375319ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-022099 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-022099 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6adf17c4-98f6-4505-b059-c4fd06b19fc4] Pending
helpers_test.go:344: "task-pv-pod" [6adf17c4-98f6-4505-b059-c4fd06b19fc4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6adf17c4-98f6-4505-b059-c4fd06b19fc4] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004528907s
addons_test.go:528: (dbg) Run:  kubectl --context addons-022099 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-022099 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-022099 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-022099 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-022099 delete pod task-pv-pod: (1.025863638s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-022099 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-022099 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-022099 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ac61de18-7ff6-45b7-bb7f-eea06d649dfa] Pending
helpers_test.go:344: "task-pv-pod-restore" [ac61de18-7ff6-45b7-bb7f-eea06d649dfa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ac61de18-7ff6-45b7-bb7f-eea06d649dfa] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004292026s
addons_test.go:570: (dbg) Run:  kubectl --context addons-022099 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-022099 delete pod task-pv-pod-restore: (1.040431016s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-022099 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-022099 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.690101324s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.86s)

TestAddons/parallel/Headlamp (19.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-022099 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9k52g" [458b6be2-8135-45e9-bf21-490e68742f04] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9k52g" [458b6be2-8135-45e9-bf21-490e68742f04] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004076446s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable headlamp --alsologtostderr -v=1: (5.793144131s)
--- PASS: TestAddons/parallel/Headlamp (19.57s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-jtfrb" [64c704a8-231c-4379-8a18-748dcae1a303] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003724775s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-022099
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
TestAddons/parallel/LocalPath (53.21s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-022099 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-022099 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [205380dd-a1d2-47dd-9407-c0d6137e446f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [205380dd-a1d2-47dd-9407-c0d6137e446f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [205380dd-a1d2-47dd-9407-c0d6137e446f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004532626s
addons_test.go:938: (dbg) Run:  kubectl --context addons-022099 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 ssh "cat /opt/local-path-provisioner/pvc-cfafe1ad-5fed-43fd-aea8-6c7af2b579b8_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-022099 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-022099 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.450674969s)
--- PASS: TestAddons/parallel/LocalPath (53.21s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wnb6h" [1daf9992-5ba7-421d-ac66-845a7f8f95f5] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004378921s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-022099
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)

                                                
                                    
TestAddons/parallel/Yakd (11.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rlnhn" [d862c91a-c14c-42e5-9692-76718340b874] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.052834662s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-022099 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-022099 addons disable yakd --alsologtostderr -v=1: (5.603188425s)
--- PASS: TestAddons/parallel/Yakd (11.66s)

                                                
                                    
TestAddons/StoppedEnableDisable (8.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-022099
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-022099: (8.292650134s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-022099
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-022099
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-022099
--- PASS: TestAddons/StoppedEnableDisable (8.55s)

                                                
                                    
TestCertOptions (107.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-859923 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-859923 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m45.913995093s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-859923 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-859923 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-859923 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-859923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-859923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-859923: (1.147606699s)
--- PASS: TestCertOptions (107.55s)

                                                
                                    
TestCertExpiration (316.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-032001 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-032001 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m46.67302207s)
E0920 21:54:09.207204   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-032001 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-032001 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (28.418214648s)
helpers_test.go:175: Cleaning up "cert-expiration-032001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-032001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-032001: (1.083365481s)
--- PASS: TestCertExpiration (316.18s)

                                                
                                    
TestDockerFlags (53.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-138325 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-138325 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (51.739156983s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-138325 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-138325 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-138325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-138325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-138325: (1.085782395s)
--- PASS: TestDockerFlags (53.28s)

                                                
                                    
TestForceSystemdFlag (54.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-378976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E0920 21:51:32.649829   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-378976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (53.366060033s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-378976 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-378976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-378976
--- PASS: TestForceSystemdFlag (54.63s)

                                                
                                    
TestForceSystemdEnv (107.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-580081 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-580081 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m46.361414982s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-580081 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-580081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-580081
--- PASS: TestForceSystemdEnv (107.52s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.22s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0920 21:53:01.003330   16802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 21:53:01.003476   16802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 21:53:01.031296   16802 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 21:53:01.031618   16802 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 21:53:01.031697   16802 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate849717900/001/docker-machine-driver-kvm2
I0920 21:53:01.305243   16802 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate849717900/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0001377b0 gz:0xc0001377b8 tar:0xc000137760 tar.bz2:0xc000137770 tar.gz:0xc000137780 tar.xz:0xc000137790 tar.zst:0xc0001377a0 tbz2:0xc000137770 tgz:0xc000137780 txz:0xc000137790 tzst:0xc0001377a0 xz:0xc0001377c0 zip:0xc0001377d0 zst:0xc0001377c8] Getters:map[file:0xc00086f470 http:0xc00088e320 https:0xc00088e410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 21:53:01.305297   16802 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate849717900/001/docker-machine-driver-kvm2
I0920 21:53:02.755310   16802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 21:53:02.755415   16802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 21:53:02.790618   16802 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 21:53:02.790656   16802 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 21:53:02.790750   16802 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 21:53:02.790793   16802 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate849717900/002/docker-machine-driver-kvm2
I0920 21:53:02.954105   16802 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate849717900/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0001377b0 gz:0xc0001377b8 tar:0xc000137760 tar.bz2:0xc000137770 tar.gz:0xc000137780 tar.xz:0xc000137790 tar.zst:0xc0001377a0 tbz2:0xc000137770 tgz:0xc000137780 txz:0xc000137790 tzst:0xc0001377a0 xz:0xc0001377c0 zip:0xc0001377d0 zst:0xc0001377c8] Getters:map[file:0xc001956260 http:0xc001ade280 https:0xc001ade2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 21:53:02.954160   16802 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate849717900/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.22s)

                                                
                                    
TestErrorSpam/setup (52.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-986160 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-986160 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-986160 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-986160 --driver=kvm2 : (52.007416148s)
--- PASS: TestErrorSpam/setup (52.01s)

                                                
                                    
TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 pause
--- PASS: TestErrorSpam/pause (1.19s)

                                                
                                    
TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

                                                
                                    
TestErrorSpam/stop (15.73s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop: (12.517870079s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop: (1.227979986s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-986160 --log_dir /tmp/nospam-986160 stop: (1.9856978s)
--- PASS: TestErrorSpam/stop (15.73s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-9629/.minikube/files/etc/test/nested/copy/16802/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (93.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-007742 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m33.961359854s)
--- PASS: TestFunctional/serial/StartWithProxy (93.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.08s)

=== RUN   TestFunctional/serial/SoftStart
I0920 21:04:24.981449   16802 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-007742 --alsologtostderr -v=8: (42.079732922s)
functional_test.go:663: soft start took 42.080395724s for "functional-007742" cluster.
I0920 21:05:07.061462   16802 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-007742 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-007742 /tmp/TestFunctionalserialCacheCmdcacheadd_local2184622870/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache add minikube-local-cache-test:functional-007742
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache delete minikube-local-cache-test:functional-007742
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-007742
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.291096ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 kubectl -- --context functional-007742 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-007742 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (41.57s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-007742 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.56978917s)
functional_test.go:761: restart took 41.569944384s for "functional-007742" cluster.
I0920 21:05:53.915921   16802 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.57s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-007742 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.94s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

TestFunctional/serial/LogsFileCmd (1.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 logs --file /tmp/TestFunctionalserialLogsFileCmd4191584763/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-007742 logs --file /tmp/TestFunctionalserialLogsFileCmd4191584763/001/logs.txt: (1.011228543s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.48s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-007742 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-007742
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-007742: exit status 115 (279.949244ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.231:32679 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-007742 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-007742 delete -f testdata/invalidsvc.yaml: (1.007732578s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.31s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 config get cpus: exit status 14 (52.127048ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 config get cpus: exit status 14 (54.579881ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

TestFunctional/parallel/DashboardCmd (26.67s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-007742 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-007742 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26682: os: process already finished
E0920 21:06:42.903280   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (26.67s)

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-007742 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (141.569146ms)

-- stdout --
	* [functional-007742] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0920 21:06:14.837224   26408 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:06:14.837348   26408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:06:14.837359   26408 out.go:358] Setting ErrFile to fd 2...
	I0920 21:06:14.837365   26408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:06:14.837631   26408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:06:14.838306   26408 out.go:352] Setting JSON to false
	I0920 21:06:14.839627   26408 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2924,"bootTime":1726863451,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:06:14.839747   26408 start.go:139] virtualization: kvm guest
	I0920 21:06:14.841521   26408 out.go:177] * [functional-007742] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:06:14.842940   26408 notify.go:220] Checking for updates...
	I0920 21:06:14.842969   26408 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:06:14.844097   26408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:06:14.845388   26408 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 21:06:14.846445   26408 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 21:06:14.847630   26408 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:06:14.848947   26408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:06:14.850461   26408 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:06:14.851065   26408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:06:14.851142   26408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:06:14.869673   26408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0920 21:06:14.870242   26408 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:06:14.870838   26408 main.go:141] libmachine: Using API Version  1
	I0920 21:06:14.870865   26408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:06:14.871318   26408 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:06:14.871497   26408 main.go:141] libmachine: (functional-007742) Calling .DriverName
	I0920 21:06:14.871752   26408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:06:14.872167   26408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:06:14.872216   26408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:06:14.887084   26408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
	I0920 21:06:14.887488   26408 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:06:14.888033   26408 main.go:141] libmachine: Using API Version  1
	I0920 21:06:14.888053   26408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:06:14.888432   26408 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:06:14.888642   26408 main.go:141] libmachine: (functional-007742) Calling .DriverName
	I0920 21:06:14.921623   26408 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:06:14.922776   26408 start.go:297] selected driver: kvm2
	I0920 21:06:14.922796   26408 start.go:901] validating driver "kvm2" against &{Name:functional-007742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-007742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:06:14.922933   26408 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:06:14.925078   26408 out.go:201] 
	W0920 21:06:14.926237   26408 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 21:06:14.927336   26408 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007742 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-007742 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (128.515089ms)

-- stdout --
	* [functional-007742] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0920 21:06:08.385624   25915 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:06:08.385752   25915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:06:08.385761   25915 out.go:358] Setting ErrFile to fd 2...
	I0920 21:06:08.385766   25915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:06:08.386061   25915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:06:08.386750   25915 out.go:352] Setting JSON to false
	I0920 21:06:08.387799   25915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":2917,"bootTime":1726863451,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:06:08.387905   25915 start.go:139] virtualization: kvm guest
	I0920 21:06:08.389561   25915 out.go:177] * [functional-007742] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 21:06:08.391236   25915 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:06:08.391238   25915 notify.go:220] Checking for updates...
	I0920 21:06:08.392621   25915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:06:08.393776   25915 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	I0920 21:06:08.394895   25915 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	I0920 21:06:08.396063   25915 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:06:08.397183   25915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:06:08.398655   25915 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:06:08.399075   25915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:06:08.399132   25915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:06:08.414000   25915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0920 21:06:08.414357   25915 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:06:08.414912   25915 main.go:141] libmachine: Using API Version  1
	I0920 21:06:08.414938   25915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:06:08.415276   25915 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:06:08.415469   25915 main.go:141] libmachine: (functional-007742) Calling .DriverName
	I0920 21:06:08.415728   25915 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:06:08.416057   25915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:06:08.416096   25915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:06:08.430269   25915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0920 21:06:08.430643   25915 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:06:08.431054   25915 main.go:141] libmachine: Using API Version  1
	I0920 21:06:08.431076   25915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:06:08.431383   25915 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:06:08.431515   25915 main.go:141] libmachine: (functional-007742) Calling .DriverName
	I0920 21:06:08.463818   25915 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 21:06:08.465044   25915 start.go:297] selected driver: kvm2
	I0920 21:06:08.465057   25915 start.go:901] validating driver "kvm2" against &{Name:functional-007742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-007742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:06:08.465167   25915 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:06:08.467022   25915 out.go:201] 
	W0920 21:06:08.468219   25915 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 21:06:08.469458   25915 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.89s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

TestFunctional/parallel/ServiceCmdConnect (12.52s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-007742 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-007742 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vb6s4" [5c061ea7-c4fa-4d4b-a379-b3374838aead] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vb6s4" [5c061ea7-c4fa-4d4b-a379-b3374838aead] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004377739s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.231:30584
functional_test.go:1675: http://192.168.39.231:30584: success! body:

Hostname: hello-node-connect-67bdd5bbb4-vb6s4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.231:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.231:30584
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.52s)

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (46.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7f270291-fd60-44e9-8d83-0870d8e196d9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004254778s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-007742 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-007742 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-007742 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-007742 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [940e8a76-f235-4687-82ba-9d40d434b51a] Pending
helpers_test.go:344: "sp-pod" [940e8a76-f235-4687-82ba-9d40d434b51a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [940e8a76-f235-4687-82ba-9d40d434b51a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004978953s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-007742 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-007742 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-007742 delete -f testdata/storage-provisioner/pod.yaml: (2.044297462s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-007742 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eb520b80-17ac-44bd-aac5-9093bff2c4df] Pending
helpers_test.go:344: "sp-pod" [eb520b80-17ac-44bd-aac5-9093bff2c4df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eb520b80-17ac-44bd-aac5-9093bff2c4df] Running
2024/09/20 21:06:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003847473s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-007742 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.92s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh -n functional-007742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cp functional-007742:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1070314113/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh -n functional-007742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh -n functional-007742 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)

TestFunctional/parallel/MySQL (30.82s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-007742 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-r66bh" [913d6026-8101-4b75-9ed8-3ec09a3bdc0d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-r66bh" [913d6026-8101-4b75-9ed8-3ec09a3bdc0d] Running
E0920 21:06:35.219719   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:37.781387   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004787877s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;": exit status 1 (237.971395ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0920 21:06:40.112409   16802 retry.go:31] will retry after 1.257591054s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;": exit status 1 (162.646705ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0920 21:06:41.532923   16802 retry.go:31] will retry after 2.050955487s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;": exit status 1 (136.777582ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 21:06:43.721204   16802 retry.go:31] will retry after 1.639448925s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-007742 exec mysql-6cdb49bbb-r66bh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.82s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16802/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /etc/test/nested/copy/16802/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16802.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /etc/ssl/certs/16802.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16802.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /usr/share/ca-certificates/16802.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/168022.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /etc/ssl/certs/168022.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/168022.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /usr/share/ca-certificates/168022.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-007742 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh "sudo systemctl is-active crio": exit status 1 (220.630863ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/DockerEnv/bash (0.84s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-007742 docker-env) && out/minikube-linux-amd64 status -p functional-007742"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-007742 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.84s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007742 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-007742
docker.io/kicbase/echo-server:functional-007742
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007742 image ls --format short --alsologtostderr:
I0920 21:06:22.770975   27178 out.go:345] Setting OutFile to fd 1 ...
I0920 21:06:22.771192   27178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:22.771201   27178 out.go:358] Setting ErrFile to fd 2...
I0920 21:06:22.771206   27178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:22.771357   27178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
I0920 21:06:22.771954   27178 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:22.772044   27178 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:22.772390   27178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:22.772433   27178 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:22.787525   27178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
I0920 21:06:22.788038   27178 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:22.788674   27178 main.go:141] libmachine: Using API Version  1
I0920 21:06:22.788696   27178 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:22.788988   27178 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:22.789187   27178 main.go:141] libmachine: (functional-007742) Calling .GetState
I0920 21:06:22.791265   27178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:22.791313   27178 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:22.808059   27178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
I0920 21:06:22.808561   27178 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:22.809057   27178 main.go:141] libmachine: Using API Version  1
I0920 21:06:22.809078   27178 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:22.809377   27178 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:22.809556   27178 main.go:141] libmachine: (functional-007742) Calling .DriverName
I0920 21:06:22.809740   27178 ssh_runner.go:195] Run: systemctl --version
I0920 21:06:22.809767   27178 main.go:141] libmachine: (functional-007742) Calling .GetSSHHostname
I0920 21:06:22.812850   27178 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:22.813231   27178 main.go:141] libmachine: (functional-007742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:06:bc", ip: ""} in network mk-functional-007742: {Iface:virbr1 ExpiryTime:2024-09-20 22:03:05 +0000 UTC Type:0 Mac:52:54:00:c5:06:bc Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-007742 Clientid:01:52:54:00:c5:06:bc}
I0920 21:06:22.813262   27178 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined IP address 192.168.39.231 and MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:22.813431   27178 main.go:141] libmachine: (functional-007742) Calling .GetSSHPort
I0920 21:06:22.813678   27178 main.go:141] libmachine: (functional-007742) Calling .GetSSHKeyPath
I0920 21:06:22.813851   27178 main.go:141] libmachine: (functional-007742) Calling .GetSSHUsername
I0920 21:06:22.813993   27178 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/functional-007742/id_rsa Username:docker}
I0920 21:06:22.899087   27178 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 21:06:22.932330   27178 main.go:141] libmachine: Making call to close driver server
I0920 21:06:22.932346   27178 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:22.932620   27178 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:22.932637   27178 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:22.932647   27178 main.go:141] libmachine: Making call to close driver server
I0920 21:06:22.932655   27178 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:22.932892   27178 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:22.932910   27178 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007742 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-007742 | 278ceb0719a89 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-007742 | eadbe5427012a | 1.24MB |
| docker.io/kicbase/echo-server               | functional-007742 | 9056ab77afb8e | 4.94MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007742 image ls --format table --alsologtostderr:
I0920 21:06:26.069223   27325 out.go:345] Setting OutFile to fd 1 ...
I0920 21:06:26.069460   27325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:26.069470   27325 out.go:358] Setting ErrFile to fd 2...
I0920 21:06:26.069474   27325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:26.069620   27325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
I0920 21:06:26.070179   27325 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:26.070270   27325 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:26.070618   27325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:26.070655   27325 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:26.085513   27325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
I0920 21:06:26.085926   27325 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:26.086478   27325 main.go:141] libmachine: Using API Version  1
I0920 21:06:26.086501   27325 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:26.086821   27325 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:26.087037   27325 main.go:141] libmachine: (functional-007742) Calling .GetState
I0920 21:06:26.088846   27325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:26.088887   27325 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:26.102853   27325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
I0920 21:06:26.103265   27325 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:26.103693   27325 main.go:141] libmachine: Using API Version  1
I0920 21:06:26.103711   27325 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:26.103995   27325 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:26.104145   27325 main.go:141] libmachine: (functional-007742) Calling .DriverName
I0920 21:06:26.104313   27325 ssh_runner.go:195] Run: systemctl --version
I0920 21:06:26.104349   27325 main.go:141] libmachine: (functional-007742) Calling .GetSSHHostname
I0920 21:06:26.107156   27325 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:26.107534   27325 main.go:141] libmachine: (functional-007742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:06:bc", ip: ""} in network mk-functional-007742: {Iface:virbr1 ExpiryTime:2024-09-20 22:03:05 +0000 UTC Type:0 Mac:52:54:00:c5:06:bc Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-007742 Clientid:01:52:54:00:c5:06:bc}
I0920 21:06:26.107560   27325 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined IP address 192.168.39.231 and MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:26.107767   27325 main.go:141] libmachine: (functional-007742) Calling .GetSSHPort
I0920 21:06:26.107965   27325 main.go:141] libmachine: (functional-007742) Calling .GetSSHKeyPath
I0920 21:06:26.108107   27325 main.go:141] libmachine: (functional-007742) Calling .GetSSHUsername
I0920 21:06:26.108267   27325 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/functional-007742/id_rsa Username:docker}
I0920 21:06:26.187486   27325 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 21:06:26.227351   27325 main.go:141] libmachine: Making call to close driver server
I0920 21:06:26.227371   27325 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:26.227684   27325 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:26.227723   27325 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:26.227737   27325 main.go:141] libmachine: Making call to close driver server
I0920 21:06:26.227749   27325 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:26.227983   27325 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:26.227998   27325 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007742 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"278ceb0719a894f74a300bf14ba6513be22013b5523ffddf798180972b1e368e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-007742"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"eadbe5427012aa7646b800780d9a897be78e2b58ee350b8a93a8223f30dca05d","repoDigests":[],"repoTags":["localhost/my-image:functional-007742"],"size":"1240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-007742"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007742 image ls --format json --alsologtostderr:
I0920 21:06:25.882946   27301 out.go:345] Setting OutFile to fd 1 ...
I0920 21:06:25.883186   27301 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:25.883195   27301 out.go:358] Setting ErrFile to fd 2...
I0920 21:06:25.883202   27301 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:25.883399   27301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
I0920 21:06:25.883987   27301 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:25.884097   27301 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:25.884503   27301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:25.884557   27301 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:25.899341   27301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
I0920 21:06:25.899822   27301 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:25.900321   27301 main.go:141] libmachine: Using API Version  1
I0920 21:06:25.900344   27301 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:25.900722   27301 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:25.900944   27301 main.go:141] libmachine: (functional-007742) Calling .GetState
I0920 21:06:25.902800   27301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:25.902840   27301 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:25.916779   27301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
I0920 21:06:25.917210   27301 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:25.917691   27301 main.go:141] libmachine: Using API Version  1
I0920 21:06:25.917716   27301 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:25.918048   27301 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:25.918247   27301 main.go:141] libmachine: (functional-007742) Calling .DriverName
I0920 21:06:25.918425   27301 ssh_runner.go:195] Run: systemctl --version
I0920 21:06:25.918449   27301 main.go:141] libmachine: (functional-007742) Calling .GetSSHHostname
I0920 21:06:25.921218   27301 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:25.921619   27301 main.go:141] libmachine: (functional-007742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:06:bc", ip: ""} in network mk-functional-007742: {Iface:virbr1 ExpiryTime:2024-09-20 22:03:05 +0000 UTC Type:0 Mac:52:54:00:c5:06:bc Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-007742 Clientid:01:52:54:00:c5:06:bc}
I0920 21:06:25.921652   27301 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined IP address 192.168.39.231 and MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:25.921775   27301 main.go:141] libmachine: (functional-007742) Calling .GetSSHPort
I0920 21:06:25.921961   27301 main.go:141] libmachine: (functional-007742) Calling .GetSSHKeyPath
I0920 21:06:25.922116   27301 main.go:141] libmachine: (functional-007742) Calling .GetSSHUsername
I0920 21:06:25.922281   27301 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/functional-007742/id_rsa Username:docker}
I0920 21:06:25.994994   27301 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 21:06:26.021602   27301 main.go:141] libmachine: Making call to close driver server
I0920 21:06:26.021618   27301 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:26.021920   27301 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:26.021939   27301 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:26.021954   27301 main.go:141] libmachine: Making call to close driver server
I0920 21:06:26.021965   27301 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:26.021944   27301 main.go:141] libmachine: (functional-007742) DBG | Closing plugin on server side
I0920 21:06:26.022174   27301 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:26.022190   27301 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:26.022253   27301 main.go:141] libmachine: (functional-007742) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007742 image ls --format yaml --alsologtostderr:
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 278ceb0719a894f74a300bf14ba6513be22013b5523ffddf798180972b1e368e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-007742
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-007742
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: eadbe5427012aa7646b800780d9a897be78e2b58ee350b8a93a8223f30dca05d
repoDigests: []
repoTags:
- localhost/my-image:functional-007742
size: "1240000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007742 image ls --format yaml --alsologtostderr:
I0920 21:06:25.688471   27277 out.go:345] Setting OutFile to fd 1 ...
I0920 21:06:25.688593   27277 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:25.688601   27277 out.go:358] Setting ErrFile to fd 2...
I0920 21:06:25.688606   27277 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:25.688778   27277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
I0920 21:06:25.689565   27277 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:25.689700   27277 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:25.690169   27277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:25.690211   27277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:25.704767   27277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
I0920 21:06:25.705265   27277 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:25.705820   27277 main.go:141] libmachine: Using API Version  1
I0920 21:06:25.705846   27277 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:25.706186   27277 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:25.706371   27277 main.go:141] libmachine: (functional-007742) Calling .GetState
I0920 21:06:25.708177   27277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:25.708220   27277 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:25.726563   27277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
I0920 21:06:25.727037   27277 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:25.727531   27277 main.go:141] libmachine: Using API Version  1
I0920 21:06:25.727555   27277 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:25.727922   27277 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:25.728089   27277 main.go:141] libmachine: (functional-007742) Calling .DriverName
I0920 21:06:25.728241   27277 ssh_runner.go:195] Run: systemctl --version
I0920 21:06:25.728275   27277 main.go:141] libmachine: (functional-007742) Calling .GetSSHHostname
I0920 21:06:25.731200   27277 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:25.731673   27277 main.go:141] libmachine: (functional-007742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:06:bc", ip: ""} in network mk-functional-007742: {Iface:virbr1 ExpiryTime:2024-09-20 22:03:05 +0000 UTC Type:0 Mac:52:54:00:c5:06:bc Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-007742 Clientid:01:52:54:00:c5:06:bc}
I0920 21:06:25.731704   27277 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined IP address 192.168.39.231 and MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:25.731861   27277 main.go:141] libmachine: (functional-007742) Calling .GetSSHPort
I0920 21:06:25.732008   27277 main.go:141] libmachine: (functional-007742) Calling .GetSSHKeyPath
I0920 21:06:25.732150   27277 main.go:141] libmachine: (functional-007742) Calling .GetSSHUsername
I0920 21:06:25.732278   27277 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/functional-007742/id_rsa Username:docker}
I0920 21:06:25.808165   27277 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 21:06:25.834026   27277 main.go:141] libmachine: Making call to close driver server
I0920 21:06:25.834036   27277 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:25.834258   27277 main.go:141] libmachine: (functional-007742) DBG | Closing plugin on server side
I0920 21:06:25.834299   27277 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:25.834309   27277 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:25.834320   27277 main.go:141] libmachine: Making call to close driver server
I0920 21:06:25.834328   27277 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:25.834552   27277 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:25.834570   27277 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:25.834584   27277 main.go:141] libmachine: (functional-007742) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh pgrep buildkitd: exit status 1 (186.037692ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image build -t localhost/my-image:functional-007742 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-007742 image build -t localhost/my-image:functional-007742 testdata/build --alsologtostderr: (2.323837784s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007742 image build -t localhost/my-image:functional-007742 testdata/build --alsologtostderr:
I0920 21:06:23.166152   27230 out.go:345] Setting OutFile to fd 1 ...
I0920 21:06:23.166296   27230 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:23.166305   27230 out.go:358] Setting ErrFile to fd 2...
I0920 21:06:23.166309   27230 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:06:23.166529   27230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
I0920 21:06:23.167152   27230 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:23.167732   27230 config.go:182] Loaded profile config "functional-007742": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:06:23.168169   27230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:23.168228   27230 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:23.183548   27230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
I0920 21:06:23.184113   27230 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:23.184757   27230 main.go:141] libmachine: Using API Version  1
I0920 21:06:23.184783   27230 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:23.185215   27230 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:23.185407   27230 main.go:141] libmachine: (functional-007742) Calling .GetState
I0920 21:06:23.187337   27230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 21:06:23.187374   27230 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:06:23.202189   27230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
I0920 21:06:23.202658   27230 main.go:141] libmachine: () Calling .GetVersion
I0920 21:06:23.203082   27230 main.go:141] libmachine: Using API Version  1
I0920 21:06:23.203101   27230 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:06:23.203440   27230 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:06:23.203635   27230 main.go:141] libmachine: (functional-007742) Calling .DriverName
I0920 21:06:23.203843   27230 ssh_runner.go:195] Run: systemctl --version
I0920 21:06:23.203879   27230 main.go:141] libmachine: (functional-007742) Calling .GetSSHHostname
I0920 21:06:23.206723   27230 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:23.207166   27230 main.go:141] libmachine: (functional-007742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:06:bc", ip: ""} in network mk-functional-007742: {Iface:virbr1 ExpiryTime:2024-09-20 22:03:05 +0000 UTC Type:0 Mac:52:54:00:c5:06:bc Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:functional-007742 Clientid:01:52:54:00:c5:06:bc}
I0920 21:06:23.207196   27230 main.go:141] libmachine: (functional-007742) DBG | domain functional-007742 has defined IP address 192.168.39.231 and MAC address 52:54:00:c5:06:bc in network mk-functional-007742
I0920 21:06:23.207229   27230 main.go:141] libmachine: (functional-007742) Calling .GetSSHPort
I0920 21:06:23.207379   27230 main.go:141] libmachine: (functional-007742) Calling .GetSSHKeyPath
I0920 21:06:23.207481   27230 main.go:141] libmachine: (functional-007742) Calling .GetSSHUsername
I0920 21:06:23.207609   27230 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/functional-007742/id_rsa Username:docker}
I0920 21:06:23.284170   27230 build_images.go:161] Building image from path: /tmp/build.2824612842.tar
I0920 21:06:23.284237   27230 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 21:06:23.295379   27230 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2824612842.tar
I0920 21:06:23.300134   27230 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2824612842.tar: stat -c "%s %y" /var/lib/minikube/build/build.2824612842.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2824612842.tar': No such file or directory
I0920 21:06:23.300176   27230 ssh_runner.go:362] scp /tmp/build.2824612842.tar --> /var/lib/minikube/build/build.2824612842.tar (3072 bytes)
I0920 21:06:23.339322   27230 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2824612842
I0920 21:06:23.353413   27230 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2824612842 -xf /var/lib/minikube/build/build.2824612842.tar
I0920 21:06:23.367810   27230 docker.go:360] Building image: /var/lib/minikube/build/build.2824612842
I0920 21:06:23.367894   27230 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-007742 /var/lib/minikube/build/build.2824612842
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:eadbe5427012aa7646b800780d9a897be78e2b58ee350b8a93a8223f30dca05d done
#8 naming to localhost/my-image:functional-007742 done
#8 DONE 0.1s
I0920 21:06:25.416778   27230 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-007742 /var/lib/minikube/build/build.2824612842: (2.048854677s)
I0920 21:06:25.416855   27230 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2824612842
I0920 21:06:25.429542   27230 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2824612842.tar
I0920 21:06:25.443761   27230 build_images.go:217] Built localhost/my-image:functional-007742 from /tmp/build.2824612842.tar
I0920 21:06:25.443792   27230 build_images.go:133] succeeded building to: functional-007742
I0920 21:06:25.443797   27230 build_images.go:134] failed building to: 
I0920 21:06:25.443845   27230 main.go:141] libmachine: Making call to close driver server
I0920 21:06:25.443860   27230 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:25.444106   27230 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:25.444130   27230 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:06:25.444138   27230 main.go:141] libmachine: Making call to close driver server
I0920 21:06:25.444144   27230 main.go:141] libmachine: (functional-007742) Calling .Close
I0920 21:06:25.444367   27230 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:06:25.444378   27230 main.go:141] libmachine: (functional-007742) DBG | Closing plugin on server side
I0920 21:06:25.444385   27230 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.70s)

TestFunctional/parallel/ImageCommands/Setup (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.004893923s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-007742
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.03s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "312.862459ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.228008ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "264.930144ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.048306ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image load --daemon kicbase/echo-server:functional-007742 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-007742 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-007742 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-l2k6v" [71d19fe0-ba38-4c8b-a22f-851868add211] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-l2k6v" [71d19fe0-ba38-4c8b-a22f-851868add211] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003932511s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
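The `kubectl expose deployment hello-node --type=NodePort --port=8080` step above is shorthand for applying a Service manifest roughly like this sketch (the `app: hello-node` selector assumes the default label that `kubectl create deployment` applies; the nodePort is left for the cluster to allocate):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello-node      # default label set by `kubectl create deployment`
  ports:
    - port: 8080
      targetPort: 8080   # nodePort omitted: allocated from the NodePort range
```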

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image load --daemon kicbase/echo-server:functional-007742 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-007742
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image load --daemon kicbase/echo-server:functional-007742 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image save kicbase/echo-server:functional-007742 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image rm kicbase/echo-server:functional-007742 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)
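`image save` is expected to write the image as a plain tar archive, which is why the same `echo-server-save.tar` can be fed back to `image load` above. A minimal sketch of inspecting such an archive with standard tools (the archive here is a locally built stand-in, not the real save file):

```shell
# Build a stand-in archive shaped like what `image save` leaves on disk,
# then list its contents; with the real file this would be
# `tar -tf echo-server-save.tar`.
tmpdir=$(mktemp -d)
echo '{"schemaVersion": 2}' > "$tmpdir/manifest.json"
tar -cf "$tmpdir/echo-server-save.tar" -C "$tmpdir" manifest.json
tar -tf "$tmpdir/echo-server-save.tar"   # → manifest.json
rm -rf "$tmpdir"
```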

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-007742
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 image save --daemon kicbase/echo-server:functional-007742 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-007742
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 update-context --alsologtostderr -v=2
E0920 21:06:32.649673   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.656063   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.667438   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.688845   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.730267   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.812079   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:32.973573   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:33.295300   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:33.937571   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/MountCmd/any-port (11.29s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdany-port1088975687/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726866368474074628" to /tmp/TestFunctionalparallelMountCmdany-port1088975687/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726866368474074628" to /tmp/TestFunctionalparallelMountCmdany-port1088975687/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726866368474074628" to /tmp/TestFunctionalparallelMountCmdany-port1088975687/001/test-1726866368474074628
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (179.073941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 21:06:08.653436   16802 retry.go:31] will retry after 442.465996ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 21:06 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 21:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 21:06 test-1726866368474074628
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh cat /mount-9p/test-1726866368474074628
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-007742 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4864484f-ae2a-43a7-8277-e3eaeca42014] Pending
helpers_test.go:344: "busybox-mount" [4864484f-ae2a-43a7-8277-e3eaeca42014] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4864484f-ae2a-43a7-8277-e3eaeca42014] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4864484f-ae2a-43a7-8277-e3eaeca42014] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00318898s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-007742 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdany-port1088975687/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.29s)
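The `retry.go:31] will retry after 442.465996ms` line above is minikube's generic retry helper papering over the window before the 9p mount appears to `findmnt`. The same idea in shell (a hypothetical sketch, not minikube's actual implementation):

```shell
# Retry a command with exponential backoff until it succeeds or the
# attempt budget runs out; minikube's retry.go does the Go equivalent
# around `findmnt -T /mount-9p | grep 9p`.
retry() {
  attempts=$1
  shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$delay"
    delay=$((delay * 2))
    i=$((i + 1))
  done
  return 1
}

retry 3 true && echo "mounted"   # `true` stands in for the findmnt check
```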

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service list -o json
functional_test.go:1494: Took "249.213226ms" to run "out/minikube-linux-amd64 -p functional-007742 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.231:31893
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.231:31893
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdspecific-port1488856589/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.678106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 21:06:19.968591   16802 retry.go:31] will retry after 388.006762ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdspecific-port1488856589/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007742 ssh "sudo umount -f /mount-9p": exit status 1 (199.741406ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-007742 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdspecific-port1488856589/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)
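In the cleanup above, `umount -f /mount-9p` reports "not mounted." (umount exit status 32) because the mount was already detached, and the test treats that as benign. A sketch of that tolerance (`do_umount` is a hypothetical stand-in for the ssh'd umount, not minikube's code):

```shell
# Treat "not mounted" (umount status 32) as a no-op during cleanup,
# but surface any other failure.
do_umount() {
  # stand-in for: minikube -p <profile> ssh "sudo umount -f /mount-9p"
  echo "umount: /mount-9p: not mounted." >&2
  return 32
}

status=0
do_umount || status=$?
if [ "$status" -ne 0 ] && [ "$status" -ne 32 ]; then
  echo "cleanup failed: status $status" >&2
  exit 1
fi
echo "cleanup ok"   # prints "cleanup ok"
```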

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007742 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-007742 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007742 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2214690719/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-007742
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-007742
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-007742
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (214.04s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-923979 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-923979 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m8.663216548s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-923979 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0920 21:53:28.229884   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.236370   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.247799   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.269184   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.310627   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.392088   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.553641   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:28.875388   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:29.517481   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:30.799177   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:33.361609   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:53:38.483968   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-923979 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.749009893s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-923979 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-923979 addons enable gvisor: (2.614999188s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d5671f56-e6fc-4f2e-b827-d6e085b5fa5e] Running
E0920 21:53:48.725798   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004959828s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-923979 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [260bb064-82ac-4275-b457-9f3d61c0d83f] Pending
helpers_test.go:344: "nginx-gvisor" [260bb064-82ac-4275-b457-9f3d61c0d83f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [260bb064-82ac-4275-b457-9f3d61c0d83f] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 25.004888781s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-923979
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-923979: (2.295674407s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-923979 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-923979 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m13.372750861s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d5671f56-e6fc-4f2e-b827-d6e085b5fa5e] Running
helpers_test.go:344: "gvisor" [d5671f56-e6fc-4f2e-b827-d6e085b5fa5e] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003685567s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [260bb064-82ac-4275-b457-9f3d61c0d83f] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.004029232s
helpers_test.go:175: Cleaning up "gvisor-923979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-923979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-923979: (1.131207138s)
--- PASS: TestGvisorAddon (214.04s)
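The `testdata/nginx-gvisor.yaml` applied above is not reproduced in this log; a pod that the test's `run=nginx,runtime=gvisor` selector would match generally looks like this sketch (the `gvisor` RuntimeClass name matches what the addon registers; the rest is an assumption for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  labels:
    run: nginx
    runtime: gvisor
spec:
  runtimeClassName: gvisor   # schedule this pod onto gVisor's runsc runtime
  containers:
    - name: nginx
      image: nginx
```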

                                                
                                    
TestMultiControlPlane/serial/StartCluster (213.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-137997 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0920 21:06:53.145546   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:07:13.627170   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:07:54.589746   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:16.512111   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-137997 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m33.055584256s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.70s)

TestMultiControlPlane/serial/DeployApp (4.67s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-137997 -- rollout status deployment/busybox: (2.470824292s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-bkwwx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-cvfgx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-sbsdm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-bkwwx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-cvfgx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-sbsdm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-bkwwx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-cvfgx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-sbsdm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.67s)

TestMultiControlPlane/serial/PingHostFromPods (1.19s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-bkwwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-bkwwx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-cvfgx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-cvfgx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-sbsdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-137997 -- exec busybox-7dff88458-sbsdm -- sh -c "ping -c 1 192.168.39.1"
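The ping checks above resolve `host.minikube.internal` inside each busybox pod and pull the host gateway IP out of the nslookup output with `nslookup … | awk 'NR==5' | cut -d' ' -f3`. A minimal sketch of that extraction step follows; the sample nslookup output is illustrative (busybox-style, hypothetical addresses), not captured from this run:

```python
# Replicates the shell pipeline used by the test:
#   nslookup <name> | awk 'NR==5' | cut -d' ' -f3
# i.e. take line 5 of the nslookup output (1-indexed) and return its
# third space-delimited field, which busybox nslookup prints as the IP.
def extract_host_ip(nslookup_output: str) -> str:
    lines = nslookup_output.splitlines()
    fields = lines[4].split(" ")  # awk 'NR==5' then cut -d' ' -f3
    return fields[2]

# Illustrative busybox-style nslookup output (hypothetical values).
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

print(extract_host_ip(sample))  # → 192.168.39.1
```

The extracted address is what the test then feeds to `ping -c 1` from inside the pod, as the exec lines above show.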
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

TestMultiControlPlane/serial/AddWorkerNode (63.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-137997 -v=7 --alsologtostderr
E0920 21:11:00.962657   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:00.969107   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:00.980469   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:01.002044   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:01.043535   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:01.124937   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:01.286465   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:01.608689   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:02.250333   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:03.532202   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:06.094355   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:11.216541   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:21.458388   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-137997 -v=7 --alsologtostderr: (1m3.037412443s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.86s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-137997 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0920 21:11:32.649739   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (12.6s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp testdata/cp-test.txt ha-137997:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1114107906/001/cp-test_ha-137997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997:/home/docker/cp-test.txt ha-137997-m02:/home/docker/cp-test_ha-137997_ha-137997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test_ha-137997_ha-137997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997:/home/docker/cp-test.txt ha-137997-m03:/home/docker/cp-test_ha-137997_ha-137997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test_ha-137997_ha-137997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997:/home/docker/cp-test.txt ha-137997-m04:/home/docker/cp-test_ha-137997_ha-137997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test_ha-137997_ha-137997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp testdata/cp-test.txt ha-137997-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1114107906/001/cp-test_ha-137997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m02:/home/docker/cp-test.txt ha-137997:/home/docker/cp-test_ha-137997-m02_ha-137997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test_ha-137997-m02_ha-137997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m02:/home/docker/cp-test.txt ha-137997-m03:/home/docker/cp-test_ha-137997-m02_ha-137997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test_ha-137997-m02_ha-137997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m02:/home/docker/cp-test.txt ha-137997-m04:/home/docker/cp-test_ha-137997-m02_ha-137997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test_ha-137997-m02_ha-137997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp testdata/cp-test.txt ha-137997-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1114107906/001/cp-test_ha-137997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m03:/home/docker/cp-test.txt ha-137997:/home/docker/cp-test_ha-137997-m03_ha-137997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test_ha-137997-m03_ha-137997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m03:/home/docker/cp-test.txt ha-137997-m02:/home/docker/cp-test_ha-137997-m03_ha-137997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test_ha-137997-m03_ha-137997-m02.txt"
E0920 21:11:41.939717   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m03:/home/docker/cp-test.txt ha-137997-m04:/home/docker/cp-test_ha-137997-m03_ha-137997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test_ha-137997-m03_ha-137997-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp testdata/cp-test.txt ha-137997-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1114107906/001/cp-test_ha-137997-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m04:/home/docker/cp-test.txt ha-137997:/home/docker/cp-test_ha-137997-m04_ha-137997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997 "sudo cat /home/docker/cp-test_ha-137997-m04_ha-137997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m04:/home/docker/cp-test.txt ha-137997-m02:/home/docker/cp-test_ha-137997-m04_ha-137997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m02 "sudo cat /home/docker/cp-test_ha-137997-m04_ha-137997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 cp ha-137997-m04:/home/docker/cp-test.txt ha-137997-m03:/home/docker/cp-test_ha-137997-m04_ha-137997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 ssh -n ha-137997-m03 "sudo cat /home/docker/cp-test_ha-137997-m04_ha-137997-m03.txt"
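The CopyFile run above is a full copy matrix: for each of the four nodes, the test copies the local testdata file in, copies it back out to the host, and fans it out to every other node, verifying each hop with `ssh … sudo cat`. The sketch below generates that command matrix under the invocation shapes seen in the log (the helper name `copy_file_commands` and the plain `minikube` binary name are illustrative; the run above uses `out/minikube-linux-amd64`):

```python
# Sketch of the CopyFile matrix: host -> node, node -> host, and
# node -> every-other-node transfers, matching the cp targets above.
def copy_file_commands(profile: str, nodes: list[str]) -> list[str]:
    cmds = []
    for src in nodes:
        # host -> node
        cmds.append(f"minikube -p {profile} cp testdata/cp-test.txt "
                    f"{src}:/home/docker/cp-test.txt")
        # node -> host
        cmds.append(f"minikube -p {profile} cp {src}:/home/docker/cp-test.txt "
                    f"/tmp/cp-test_{src}.txt")
        # node -> every other node
        for dst in nodes:
            if dst == src:
                continue
            cmds.append(f"minikube -p {profile} cp {src}:/home/docker/cp-test.txt "
                        f"{dst}:/home/docker/cp-test_{src}_{dst}.txt")
    return cmds

nodes = ["ha-137997", "ha-137997-m02", "ha-137997-m03", "ha-137997-m04"]
cmds = copy_file_commands("ha-137997", nodes)
# 4 nodes x (2 host transfers + 3 fan-outs) = 20 copy operations
print(len(cmds))  # → 20
```

Twenty copies plus a `ssh … sudo cat` verification after each accounts for the ~12.6 s wall time of this subtest.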
--- PASS: TestMultiControlPlane/serial/CopyFile (12.60s)

TestMultiControlPlane/serial/StopSecondaryNode (13.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-137997 node stop m02 -v=7 --alsologtostderr: (13.305226943s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr: exit status 7 (613.496794ms)

-- stdout --
	ha-137997
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-137997-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137997-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-137997-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0920 21:11:59.160656   31852 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:11:59.160895   31852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:11:59.160904   31852 out.go:358] Setting ErrFile to fd 2...
	I0920 21:11:59.160908   31852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:11:59.161056   31852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:11:59.161210   31852 out.go:352] Setting JSON to false
	I0920 21:11:59.161240   31852 mustload.go:65] Loading cluster: ha-137997
	I0920 21:11:59.161290   31852 notify.go:220] Checking for updates...
	I0920 21:11:59.161788   31852 config.go:182] Loaded profile config "ha-137997": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:11:59.161814   31852 status.go:174] checking status of ha-137997 ...
	I0920 21:11:59.162253   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.162306   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.181895   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0920 21:11:59.182352   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.182981   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.183000   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.183351   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.183533   31852 main.go:141] libmachine: (ha-137997) Calling .GetState
	I0920 21:11:59.185400   31852 status.go:364] ha-137997 host status = "Running" (err=<nil>)
	I0920 21:11:59.185420   31852 host.go:66] Checking if "ha-137997" exists ...
	I0920 21:11:59.185880   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.185931   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.200252   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0920 21:11:59.200596   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.200983   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.201002   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.201291   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.201453   31852 main.go:141] libmachine: (ha-137997) Calling .GetIP
	I0920 21:11:59.204283   31852 main.go:141] libmachine: (ha-137997) DBG | domain ha-137997 has defined MAC address 52:54:00:23:71:35 in network mk-ha-137997
	I0920 21:11:59.204740   31852 main.go:141] libmachine: (ha-137997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:71:35", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:07:03 +0000 UTC Type:0 Mac:52:54:00:23:71:35 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-137997 Clientid:01:52:54:00:23:71:35}
	I0920 21:11:59.204767   31852 main.go:141] libmachine: (ha-137997) DBG | domain ha-137997 has defined IP address 192.168.39.107 and MAC address 52:54:00:23:71:35 in network mk-ha-137997
	I0920 21:11:59.204928   31852 host.go:66] Checking if "ha-137997" exists ...
	I0920 21:11:59.205200   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.205232   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.219164   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0920 21:11:59.219627   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.220133   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.220150   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.220467   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.220664   31852 main.go:141] libmachine: (ha-137997) Calling .DriverName
	I0920 21:11:59.220894   31852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:11:59.220931   31852 main.go:141] libmachine: (ha-137997) Calling .GetSSHHostname
	I0920 21:11:59.223717   31852 main.go:141] libmachine: (ha-137997) DBG | domain ha-137997 has defined MAC address 52:54:00:23:71:35 in network mk-ha-137997
	I0920 21:11:59.224180   31852 main.go:141] libmachine: (ha-137997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:71:35", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:07:03 +0000 UTC Type:0 Mac:52:54:00:23:71:35 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-137997 Clientid:01:52:54:00:23:71:35}
	I0920 21:11:59.224213   31852 main.go:141] libmachine: (ha-137997) DBG | domain ha-137997 has defined IP address 192.168.39.107 and MAC address 52:54:00:23:71:35 in network mk-ha-137997
	I0920 21:11:59.224315   31852 main.go:141] libmachine: (ha-137997) Calling .GetSSHPort
	I0920 21:11:59.224464   31852 main.go:141] libmachine: (ha-137997) Calling .GetSSHKeyPath
	I0920 21:11:59.224603   31852 main.go:141] libmachine: (ha-137997) Calling .GetSSHUsername
	I0920 21:11:59.224738   31852 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/ha-137997/id_rsa Username:docker}
	I0920 21:11:59.304753   31852 ssh_runner.go:195] Run: systemctl --version
	I0920 21:11:59.310769   31852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:11:59.324760   31852 kubeconfig.go:125] found "ha-137997" server: "https://192.168.39.254:8443"
	I0920 21:11:59.324787   31852 api_server.go:166] Checking apiserver status ...
	I0920 21:11:59.324816   31852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:11:59.338363   31852 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1938/cgroup
	W0920 21:11:59.347095   31852 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1938/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 21:11:59.347138   31852 ssh_runner.go:195] Run: ls
	I0920 21:11:59.351224   31852 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 21:11:59.357329   31852 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 21:11:59.357353   31852 status.go:456] ha-137997 apiserver status = Running (err=<nil>)
	I0920 21:11:59.357365   31852 status.go:176] ha-137997 status: &{Name:ha-137997 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:11:59.357383   31852 status.go:174] checking status of ha-137997-m02 ...
	I0920 21:11:59.357738   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.357771   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.372643   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0920 21:11:59.372989   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.373517   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.373540   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.373907   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.374099   31852 main.go:141] libmachine: (ha-137997-m02) Calling .GetState
	I0920 21:11:59.375472   31852 status.go:364] ha-137997-m02 host status = "Stopped" (err=<nil>)
	I0920 21:11:59.375483   31852 status.go:377] host is not running, skipping remaining checks
	I0920 21:11:59.375488   31852 status.go:176] ha-137997-m02 status: &{Name:ha-137997-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:11:59.375512   31852 status.go:174] checking status of ha-137997-m03 ...
	I0920 21:11:59.375773   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.375803   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.390450   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0920 21:11:59.390918   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.391340   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.391361   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.391653   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.391796   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetState
	I0920 21:11:59.393494   31852 status.go:364] ha-137997-m03 host status = "Running" (err=<nil>)
	I0920 21:11:59.393509   31852 host.go:66] Checking if "ha-137997-m03" exists ...
	I0920 21:11:59.393774   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.393809   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.409050   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0920 21:11:59.409450   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.409951   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.409975   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.410287   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.410456   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetIP
	I0920 21:11:59.413125   31852 main.go:141] libmachine: (ha-137997-m03) DBG | domain ha-137997-m03 has defined MAC address 52:54:00:a6:1a:8f in network mk-ha-137997
	I0920 21:11:59.413626   31852 main.go:141] libmachine: (ha-137997-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:8f", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:09:15 +0000 UTC Type:0 Mac:52:54:00:a6:1a:8f Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-137997-m03 Clientid:01:52:54:00:a6:1a:8f}
	I0920 21:11:59.413650   31852 main.go:141] libmachine: (ha-137997-m03) DBG | domain ha-137997-m03 has defined IP address 192.168.39.191 and MAC address 52:54:00:a6:1a:8f in network mk-ha-137997
	I0920 21:11:59.413882   31852 host.go:66] Checking if "ha-137997-m03" exists ...
	I0920 21:11:59.414197   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.414234   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.429339   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0920 21:11:59.429817   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.430267   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.430290   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.430592   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.430730   31852 main.go:141] libmachine: (ha-137997-m03) Calling .DriverName
	I0920 21:11:59.430912   31852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:11:59.430931   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetSSHHostname
	I0920 21:11:59.433516   31852 main.go:141] libmachine: (ha-137997-m03) DBG | domain ha-137997-m03 has defined MAC address 52:54:00:a6:1a:8f in network mk-ha-137997
	I0920 21:11:59.433961   31852 main.go:141] libmachine: (ha-137997-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:8f", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:09:15 +0000 UTC Type:0 Mac:52:54:00:a6:1a:8f Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-137997-m03 Clientid:01:52:54:00:a6:1a:8f}
	I0920 21:11:59.433981   31852 main.go:141] libmachine: (ha-137997-m03) DBG | domain ha-137997-m03 has defined IP address 192.168.39.191 and MAC address 52:54:00:a6:1a:8f in network mk-ha-137997
	I0920 21:11:59.434067   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetSSHPort
	I0920 21:11:59.434285   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetSSHKeyPath
	I0920 21:11:59.434416   31852 main.go:141] libmachine: (ha-137997-m03) Calling .GetSSHUsername
	I0920 21:11:59.434598   31852 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/ha-137997-m03/id_rsa Username:docker}
	I0920 21:11:59.518897   31852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:11:59.535895   31852 kubeconfig.go:125] found "ha-137997" server: "https://192.168.39.254:8443"
	I0920 21:11:59.535924   31852 api_server.go:166] Checking apiserver status ...
	I0920 21:11:59.535961   31852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:11:59.550939   31852 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1799/cgroup
	W0920 21:11:59.561411   31852 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1799/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 21:11:59.561449   31852 ssh_runner.go:195] Run: ls
	I0920 21:11:59.565571   31852 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 21:11:59.570083   31852 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 21:11:59.570102   31852 status.go:456] ha-137997-m03 apiserver status = Running (err=<nil>)
	I0920 21:11:59.570109   31852 status.go:176] ha-137997-m03 status: &{Name:ha-137997-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:11:59.570128   31852 status.go:174] checking status of ha-137997-m04 ...
	I0920 21:11:59.570410   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.570440   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.586194   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0920 21:11:59.586648   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.587164   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.587188   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.587504   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.587703   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetState
	I0920 21:11:59.589293   31852 status.go:364] ha-137997-m04 host status = "Running" (err=<nil>)
	I0920 21:11:59.589309   31852 host.go:66] Checking if "ha-137997-m04" exists ...
	I0920 21:11:59.589572   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.589608   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.604443   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0920 21:11:59.604872   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.605346   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.605367   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.605756   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.605930   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetIP
	I0920 21:11:59.608557   31852 main.go:141] libmachine: (ha-137997-m04) DBG | domain ha-137997-m04 has defined MAC address 52:54:00:52:a0:70 in network mk-ha-137997
	I0920 21:11:59.608955   31852 main.go:141] libmachine: (ha-137997-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a0:70", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:10:43 +0000 UTC Type:0 Mac:52:54:00:52:a0:70 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-137997-m04 Clientid:01:52:54:00:52:a0:70}
	I0920 21:11:59.608982   31852 main.go:141] libmachine: (ha-137997-m04) DBG | domain ha-137997-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:a0:70 in network mk-ha-137997
	I0920 21:11:59.609119   31852 host.go:66] Checking if "ha-137997-m04" exists ...
	I0920 21:11:59.609533   31852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:11:59.609576   31852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:11:59.628656   31852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0920 21:11:59.629132   31852 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:11:59.629680   31852 main.go:141] libmachine: Using API Version  1
	I0920 21:11:59.629708   31852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:11:59.630077   31852 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:11:59.630305   31852 main.go:141] libmachine: (ha-137997-m04) Calling .DriverName
	I0920 21:11:59.630522   31852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:11:59.630555   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetSSHHostname
	I0920 21:11:59.633742   31852 main.go:141] libmachine: (ha-137997-m04) DBG | domain ha-137997-m04 has defined MAC address 52:54:00:52:a0:70 in network mk-ha-137997
	I0920 21:11:59.634202   31852 main.go:141] libmachine: (ha-137997-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:a0:70", ip: ""} in network mk-ha-137997: {Iface:virbr1 ExpiryTime:2024-09-20 22:10:43 +0000 UTC Type:0 Mac:52:54:00:52:a0:70 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-137997-m04 Clientid:01:52:54:00:52:a0:70}
	I0920 21:11:59.634236   31852 main.go:141] libmachine: (ha-137997-m04) DBG | domain ha-137997-m04 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:a0:70 in network mk-ha-137997
	I0920 21:11:59.634373   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetSSHPort
	I0920 21:11:59.634515   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetSSHKeyPath
	I0920 21:11:59.634658   31852 main.go:141] libmachine: (ha-137997-m04) Calling .GetSSHUsername
	I0920 21:11:59.634779   31852 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/ha-137997-m04/id_rsa Username:docker}
	I0920 21:11:59.719879   31852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:11:59.733926   31852 status.go:176] ha-137997-m04 status: &{Name:ha-137997-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.92s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0920 21:12:00.353990   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.3s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 node start m02 -v=7 --alsologtostderr
E0920 21:12:22.901714   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-137997 node start m02 -v=7 --alsologtostderr: (42.389326914s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (295.6s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-137997 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-137997 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-137997 -v=7 --alsologtostderr: (41.575492238s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-137997 --wait=true -v=7 --alsologtostderr
E0920 21:13:44.823923   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:16:00.962722   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:16:28.666468   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:16:32.650406   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-137997 --wait=true -v=7 --alsologtostderr: (4m13.9138606s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-137997
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (295.60s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.28s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-137997 node delete m03 -v=7 --alsologtostderr: (6.554903008s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.28s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (39.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-137997 stop -v=7 --alsologtostderr: (38.974578107s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr: exit status 7 (98.566602ms)

-- stdout --
	ha-137997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137997-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-137997-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I0920 21:18:27.027255   34922 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:18:27.027362   34922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:18:27.027371   34922 out.go:358] Setting ErrFile to fd 2...
	I0920 21:18:27.027376   34922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:18:27.027573   34922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:18:27.027743   34922 out.go:352] Setting JSON to false
	I0920 21:18:27.027773   34922 mustload.go:65] Loading cluster: ha-137997
	I0920 21:18:27.027838   34922 notify.go:220] Checking for updates...
	I0920 21:18:27.028156   34922 config.go:182] Loaded profile config "ha-137997": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:18:27.028174   34922 status.go:174] checking status of ha-137997 ...
	I0920 21:18:27.028585   34922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:18:27.028664   34922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:18:27.047542   34922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0920 21:18:27.047978   34922 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:18:27.048530   34922 main.go:141] libmachine: Using API Version  1
	I0920 21:18:27.048564   34922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:18:27.048859   34922 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:18:27.049028   34922 main.go:141] libmachine: (ha-137997) Calling .GetState
	I0920 21:18:27.050437   34922 status.go:364] ha-137997 host status = "Stopped" (err=<nil>)
	I0920 21:18:27.050453   34922 status.go:377] host is not running, skipping remaining checks
	I0920 21:18:27.050459   34922 status.go:176] ha-137997 status: &{Name:ha-137997 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:18:27.050495   34922 status.go:174] checking status of ha-137997-m02 ...
	I0920 21:18:27.050760   34922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:18:27.050795   34922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:18:27.064884   34922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0920 21:18:27.065299   34922 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:18:27.065702   34922 main.go:141] libmachine: Using API Version  1
	I0920 21:18:27.065729   34922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:18:27.066026   34922 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:18:27.066193   34922 main.go:141] libmachine: (ha-137997-m02) Calling .GetState
	I0920 21:18:27.067599   34922 status.go:364] ha-137997-m02 host status = "Stopped" (err=<nil>)
	I0920 21:18:27.067611   34922 status.go:377] host is not running, skipping remaining checks
	I0920 21:18:27.067617   34922 status.go:176] ha-137997-m02 status: &{Name:ha-137997-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:18:27.067646   34922 status.go:174] checking status of ha-137997-m04 ...
	I0920 21:18:27.067914   34922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:18:27.067943   34922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:18:27.081963   34922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0920 21:18:27.082356   34922 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:18:27.082818   34922 main.go:141] libmachine: Using API Version  1
	I0920 21:18:27.082837   34922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:18:27.083107   34922 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:18:27.083277   34922 main.go:141] libmachine: (ha-137997-m04) Calling .GetState
	I0920 21:18:27.084669   34922 status.go:364] ha-137997-m04 host status = "Stopped" (err=<nil>)
	I0920 21:18:27.084688   34922 status.go:377] host is not running, skipping remaining checks
	I0920 21:18:27.084708   34922 status.go:176] ha-137997-m04 status: &{Name:ha-137997-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.07s)

TestMultiControlPlane/serial/RestartCluster (125.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-137997 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-137997 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m5.101707096s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (125.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMultiControlPlane/serial/AddSecondaryNode (85.25s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-137997 --control-plane -v=7 --alsologtostderr
E0920 21:21:00.962748   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:21:32.649764   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-137997 --control-plane -v=7 --alsologtostderr: (1m24.377471654s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-137997 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestImageBuild/serial/Setup (48.9s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-022601 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-022601 --driver=kvm2 : (48.899582451s)
--- PASS: TestImageBuild/serial/Setup (48.90s)

TestImageBuild/serial/NormalBuild (1.47s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-022601
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-022601: (1.466436097s)
--- PASS: TestImageBuild/serial/NormalBuild (1.47s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-022601
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-022601
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-022601
E0920 21:22:55.715708   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

TestJSONOutput/start/Command (207.65s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-318139 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0920 21:26:00.962240   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-318139 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (3m27.645588486s)
--- PASS: TestJSONOutput/start/Command (207.65s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-318139 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-318139 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.6s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-318139 --output=json --user=testUser
E0920 21:26:32.650867   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-318139 --output=json --user=testUser: (7.601867084s)
--- PASS: TestJSONOutput/stop/Command (7.60s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-164940 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-164940 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.610335ms)
-- stdout --
	{"specversion":"1.0","id":"a11f4e42-3454-41f1-ad3c-bc064f30e165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-164940] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf7ed5ad-5727-4338-af89-dad57f50bfff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"582abc9f-b87d-4f49-a8c9-61ac9f29b198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bcf0bce-1e13-4dea-92cb-f0f90a3786dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig"}}
	{"specversion":"1.0","id":"3799cba6-a36d-420b-ae36-3d812988ab2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube"}}
	{"specversion":"1.0","id":"7afca441-d3c3-454d-85e9-1a0d0172c96a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75875446-b519-45d0-af67-5bfd7408f0fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8c82f65-c427-4806-b81b-a350c46dc96e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-164940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-164940
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-087754 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-087754 --driver=kvm2 : (45.940662466s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-100439 --driver=kvm2 
E0920 21:27:24.028672   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-100439 --driver=kvm2 : (51.754904155s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-087754
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-100439
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-100439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-100439
helpers_test.go:175: Cleaning up "first-087754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-087754
--- PASS: TestMinikubeProfile (100.47s)

TestMountStart/serial/StartWithMountFirst (28.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-945639 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-945639 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.163454332s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.16s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-945639 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-945639 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (32.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-960541 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-960541 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.778218638s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.78s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.87s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-945639 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-960541
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-960541: (2.275146044s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (26.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-960541
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-960541: (25.202119798s)
--- PASS: TestMountStart/serial/RestartStopped (26.20s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-960541 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (128.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-884385 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0920 21:31:00.962339   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:31:32.649596   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-884385 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m8.35932841s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.76s)

TestMultiNode/serial/DeployApp2Nodes (4.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-884385 -- rollout status deployment/busybox: (2.297163186s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- nslookup kubernetes.io: (1.208963363s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-wnnkj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-wnnkj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-wnnkj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.75s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-vmczr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-wnnkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-884385 -- exec busybox-7dff88458-wnnkj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (58.72s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-884385 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-884385 -v 3 --alsologtostderr: (58.173572528s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.72s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-884385 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.56s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

TestMultiNode/serial/CopyFile (6.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp testdata/cp-test.txt multinode-884385:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2636137031/001/cp-test_multinode-884385.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385:/home/docker/cp-test.txt multinode-884385-m02:/home/docker/cp-test_multinode-884385_multinode-884385-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test_multinode-884385_multinode-884385-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385:/home/docker/cp-test.txt multinode-884385-m03:/home/docker/cp-test_multinode-884385_multinode-884385-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test_multinode-884385_multinode-884385-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp testdata/cp-test.txt multinode-884385-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2636137031/001/cp-test_multinode-884385-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m02:/home/docker/cp-test.txt multinode-884385:/home/docker/cp-test_multinode-884385-m02_multinode-884385.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test_multinode-884385-m02_multinode-884385.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m02:/home/docker/cp-test.txt multinode-884385-m03:/home/docker/cp-test_multinode-884385-m02_multinode-884385-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test_multinode-884385-m02_multinode-884385-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp testdata/cp-test.txt multinode-884385-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2636137031/001/cp-test_multinode-884385-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m03:/home/docker/cp-test.txt multinode-884385:/home/docker/cp-test_multinode-884385-m03_multinode-884385.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385 "sudo cat /home/docker/cp-test_multinode-884385-m03_multinode-884385.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 cp multinode-884385-m03:/home/docker/cp-test.txt multinode-884385-m02:/home/docker/cp-test_multinode-884385-m03_multinode-884385-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 ssh -n multinode-884385-m02 "sudo cat /home/docker/cp-test_multinode-884385-m03_multinode-884385-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.85s)

TestMultiNode/serial/StopNode (3.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-884385 node stop m03: (2.586099446s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-884385 status: exit status 7 (411.791257ms)
-- stdout --
	multinode-884385
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-884385-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-884385-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr: exit status 7 (409.911655ms)
-- stdout --
	multinode-884385
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-884385-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-884385-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 21:33:10.520219   43810 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:33:10.520461   43810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:33:10.520470   43810 out.go:358] Setting ErrFile to fd 2...
	I0920 21:33:10.520491   43810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:33:10.520652   43810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:33:10.520825   43810 out.go:352] Setting JSON to false
	I0920 21:33:10.520864   43810 mustload.go:65] Loading cluster: multinode-884385
	I0920 21:33:10.520967   43810 notify.go:220] Checking for updates...
	I0920 21:33:10.521259   43810 config.go:182] Loaded profile config "multinode-884385": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:33:10.521279   43810 status.go:174] checking status of multinode-884385 ...
	I0920 21:33:10.521707   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.521766   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.541123   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0920 21:33:10.541612   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.542192   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.542212   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.542597   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.542847   43810 main.go:141] libmachine: (multinode-884385) Calling .GetState
	I0920 21:33:10.544421   43810 status.go:364] multinode-884385 host status = "Running" (err=<nil>)
	I0920 21:33:10.544440   43810 host.go:66] Checking if "multinode-884385" exists ...
	I0920 21:33:10.544757   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.544798   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.559253   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0920 21:33:10.559558   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.559977   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.559996   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.560278   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.560431   43810 main.go:141] libmachine: (multinode-884385) Calling .GetIP
	I0920 21:33:10.563111   43810 main.go:141] libmachine: (multinode-884385) DBG | domain multinode-884385 has defined MAC address 52:54:00:e6:30:e1 in network mk-multinode-884385
	I0920 21:33:10.563525   43810 main.go:141] libmachine: (multinode-884385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:30:e1", ip: ""} in network mk-multinode-884385: {Iface:virbr1 ExpiryTime:2024-09-20 22:30:01 +0000 UTC Type:0 Mac:52:54:00:e6:30:e1 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-884385 Clientid:01:52:54:00:e6:30:e1}
	I0920 21:33:10.563557   43810 main.go:141] libmachine: (multinode-884385) DBG | domain multinode-884385 has defined IP address 192.168.39.247 and MAC address 52:54:00:e6:30:e1 in network mk-multinode-884385
	I0920 21:33:10.563661   43810 host.go:66] Checking if "multinode-884385" exists ...
	I0920 21:33:10.564036   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.564076   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.578288   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0920 21:33:10.578716   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.579137   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.579159   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.579462   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.579644   43810 main.go:141] libmachine: (multinode-884385) Calling .DriverName
	I0920 21:33:10.579838   43810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:33:10.579862   43810 main.go:141] libmachine: (multinode-884385) Calling .GetSSHHostname
	I0920 21:33:10.582637   43810 main.go:141] libmachine: (multinode-884385) DBG | domain multinode-884385 has defined MAC address 52:54:00:e6:30:e1 in network mk-multinode-884385
	I0920 21:33:10.583048   43810 main.go:141] libmachine: (multinode-884385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:30:e1", ip: ""} in network mk-multinode-884385: {Iface:virbr1 ExpiryTime:2024-09-20 22:30:01 +0000 UTC Type:0 Mac:52:54:00:e6:30:e1 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-884385 Clientid:01:52:54:00:e6:30:e1}
	I0920 21:33:10.583079   43810 main.go:141] libmachine: (multinode-884385) DBG | domain multinode-884385 has defined IP address 192.168.39.247 and MAC address 52:54:00:e6:30:e1 in network mk-multinode-884385
	I0920 21:33:10.583228   43810 main.go:141] libmachine: (multinode-884385) Calling .GetSSHPort
	I0920 21:33:10.583474   43810 main.go:141] libmachine: (multinode-884385) Calling .GetSSHKeyPath
	I0920 21:33:10.583599   43810 main.go:141] libmachine: (multinode-884385) Calling .GetSSHUsername
	I0920 21:33:10.583734   43810 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/multinode-884385/id_rsa Username:docker}
	I0920 21:33:10.663668   43810 ssh_runner.go:195] Run: systemctl --version
	I0920 21:33:10.669772   43810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:33:10.684170   43810 kubeconfig.go:125] found "multinode-884385" server: "https://192.168.39.247:8443"
	I0920 21:33:10.684201   43810 api_server.go:166] Checking apiserver status ...
	I0920 21:33:10.684235   43810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:33:10.697810   43810 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	W0920 21:33:10.711060   43810 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 21:33:10.711098   43810 ssh_runner.go:195] Run: ls
	I0920 21:33:10.715735   43810 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I0920 21:33:10.719883   43810 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I0920 21:33:10.719907   43810 status.go:456] multinode-884385 apiserver status = Running (err=<nil>)
	I0920 21:33:10.719917   43810 status.go:176] multinode-884385 status: &{Name:multinode-884385 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:33:10.719930   43810 status.go:174] checking status of multinode-884385-m02 ...
	I0920 21:33:10.720194   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.720227   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.734921   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0920 21:33:10.735342   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.735806   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.735825   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.736141   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.736325   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetState
	I0920 21:33:10.737799   43810 status.go:364] multinode-884385-m02 host status = "Running" (err=<nil>)
	I0920 21:33:10.737814   43810 host.go:66] Checking if "multinode-884385-m02" exists ...
	I0920 21:33:10.738118   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.738149   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.752401   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0920 21:33:10.752838   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.753327   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.753345   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.753601   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.753773   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetIP
	I0920 21:33:10.756091   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | domain multinode-884385-m02 has defined MAC address 52:54:00:50:44:f6 in network mk-multinode-884385
	I0920 21:33:10.756432   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:44:f6", ip: ""} in network mk-multinode-884385: {Iface:virbr1 ExpiryTime:2024-09-20 22:31:14 +0000 UTC Type:0 Mac:52:54:00:50:44:f6 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:multinode-884385-m02 Clientid:01:52:54:00:50:44:f6}
	I0920 21:33:10.756463   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | domain multinode-884385-m02 has defined IP address 192.168.39.210 and MAC address 52:54:00:50:44:f6 in network mk-multinode-884385
	I0920 21:33:10.756611   43810 host.go:66] Checking if "multinode-884385-m02" exists ...
	I0920 21:33:10.756893   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.756922   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.771216   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0920 21:33:10.771564   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.771978   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.772001   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.772335   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.772502   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .DriverName
	I0920 21:33:10.772685   43810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:33:10.772732   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetSSHHostname
	I0920 21:33:10.775437   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | domain multinode-884385-m02 has defined MAC address 52:54:00:50:44:f6 in network mk-multinode-884385
	I0920 21:33:10.775899   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:44:f6", ip: ""} in network mk-multinode-884385: {Iface:virbr1 ExpiryTime:2024-09-20 22:31:14 +0000 UTC Type:0 Mac:52:54:00:50:44:f6 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:multinode-884385-m02 Clientid:01:52:54:00:50:44:f6}
	I0920 21:33:10.775920   43810 main.go:141] libmachine: (multinode-884385-m02) DBG | domain multinode-884385-m02 has defined IP address 192.168.39.210 and MAC address 52:54:00:50:44:f6 in network mk-multinode-884385
	I0920 21:33:10.776078   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetSSHPort
	I0920 21:33:10.776240   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetSSHKeyPath
	I0920 21:33:10.776373   43810 main.go:141] libmachine: (multinode-884385-m02) Calling .GetSSHUsername
	I0920 21:33:10.776505   43810 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9629/.minikube/machines/multinode-884385-m02/id_rsa Username:docker}
	I0920 21:33:10.855566   43810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:33:10.869128   43810 status.go:176] multinode-884385-m02 status: &{Name:multinode-884385-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:33:10.869158   43810 status.go:174] checking status of multinode-884385-m03 ...
	I0920 21:33:10.869470   43810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:33:10.869506   43810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:33:10.885245   43810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0920 21:33:10.885664   43810 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:33:10.886152   43810 main.go:141] libmachine: Using API Version  1
	I0920 21:33:10.886172   43810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:33:10.886459   43810 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:33:10.886650   43810 main.go:141] libmachine: (multinode-884385-m03) Calling .GetState
	I0920 21:33:10.888144   43810 status.go:364] multinode-884385-m03 host status = "Stopped" (err=<nil>)
	I0920 21:33:10.888160   43810 status.go:377] host is not running, skipping remaining checks
	I0920 21:33:10.888166   43810 status.go:176] multinode-884385-m03 status: &{Name:multinode-884385-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.41s)

TestMultiNode/serial/StartAfterStop (42.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-884385 node start m03 -v=7 --alsologtostderr: (41.518117884s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.12s)

TestMultiNode/serial/RestartKeepsNodes (290.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-884385
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-884385
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-884385: (28.136310282s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-884385 --wait=true -v=8 --alsologtostderr
E0920 21:36:00.962838   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:36:32.650425   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-884385 --wait=true -v=8 --alsologtostderr: (4m22.26340794s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-884385
--- PASS: TestMultiNode/serial/RestartKeepsNodes (290.49s)

TestMultiNode/serial/DeleteNode (2.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-884385 node delete m03: (1.70699645s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.21s)

TestMultiNode/serial/StopMultiNode (25.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-884385 stop: (24.892762922s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-884385 status: exit status 7 (80.60568ms)

-- stdout --
	multinode-884385
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-884385-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr: exit status 7 (81.172514ms)

-- stdout --
	multinode-884385
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-884385-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 21:39:10.714731   45943 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:39:10.714821   45943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:39:10.714828   45943 out.go:358] Setting ErrFile to fd 2...
	I0920 21:39:10.714832   45943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:39:10.715008   45943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9629/.minikube/bin
	I0920 21:39:10.715153   45943 out.go:352] Setting JSON to false
	I0920 21:39:10.715180   45943 mustload.go:65] Loading cluster: multinode-884385
	I0920 21:39:10.715227   45943 notify.go:220] Checking for updates...
	I0920 21:39:10.715520   45943 config.go:182] Loaded profile config "multinode-884385": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:39:10.715538   45943 status.go:174] checking status of multinode-884385 ...
	I0920 21:39:10.715948   45943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:39:10.716007   45943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:39:10.736295   45943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0920 21:39:10.736804   45943 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:39:10.737301   45943 main.go:141] libmachine: Using API Version  1
	I0920 21:39:10.737318   45943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:39:10.737740   45943 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:39:10.737941   45943 main.go:141] libmachine: (multinode-884385) Calling .GetState
	I0920 21:39:10.739503   45943 status.go:364] multinode-884385 host status = "Stopped" (err=<nil>)
	I0920 21:39:10.739518   45943 status.go:377] host is not running, skipping remaining checks
	I0920 21:39:10.739526   45943 status.go:176] multinode-884385 status: &{Name:multinode-884385 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:39:10.739557   45943 status.go:174] checking status of multinode-884385-m02 ...
	I0920 21:39:10.739984   45943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 21:39:10.740028   45943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:39:10.753958   45943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0920 21:39:10.754340   45943 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:39:10.754740   45943 main.go:141] libmachine: Using API Version  1
	I0920 21:39:10.754758   45943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:39:10.755035   45943 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:39:10.755206   45943 main.go:141] libmachine: (multinode-884385-m02) Calling .GetState
	I0920 21:39:10.756578   45943 status.go:364] multinode-884385-m02 host status = "Stopped" (err=<nil>)
	I0920 21:39:10.756592   45943 status.go:377] host is not running, skipping remaining checks
	I0920 21:39:10.756599   45943 status.go:176] multinode-884385-m02 status: &{Name:multinode-884385-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.05s)

TestMultiNode/serial/RestartMultiNode (117.69s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-884385 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0920 21:39:35.718371   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:41:00.962743   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-884385 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m57.190499023s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-884385 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.69s)

TestMultiNode/serial/ValidateNameConflict (52.54s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-884385
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-884385-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-884385-m02 --driver=kvm2 : exit status 14 (58.16592ms)

-- stdout --
	* [multinode-884385-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-884385-m02' is duplicated with machine name 'multinode-884385-m02' in profile 'multinode-884385'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-884385-m03 --driver=kvm2 
E0920 21:41:32.650290   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-884385-m03 --driver=kvm2 : (51.456447326s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-884385
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-884385: exit status 80 (208.049717ms)

-- stdout --
	* Adding node m03 to cluster multinode-884385 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-884385-m03 already exists in multinode-884385-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-884385-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.54s)

TestPreload (151.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-515303 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-515303 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m21.553550134s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-515303 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-515303
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-515303: (12.572844289s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-515303 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0920 21:44:04.030615   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-515303 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (54.919214094s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-515303 image list
helpers_test.go:175: Cleaning up "test-preload-515303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-515303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-515303: (1.022485335s)
--- PASS: TestPreload (151.09s)

TestScheduledStopUnix (122.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-753126 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-753126 --memory=2048 --driver=kvm2 : (50.629922766s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-753126 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-753126 -n scheduled-stop-753126
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-753126 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 21:45:24.610531   16802 retry.go:31] will retry after 55.174µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.611688   16802 retry.go:31] will retry after 105.933µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.612851   16802 retry.go:31] will retry after 218.721µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.613983   16802 retry.go:31] will retry after 479.193µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.615114   16802 retry.go:31] will retry after 743.134µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.616235   16802 retry.go:31] will retry after 465.998µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.617363   16802 retry.go:31] will retry after 795.808µs: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.618503   16802 retry.go:31] will retry after 1.011391ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.619618   16802 retry.go:31] will retry after 3.216661ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.623833   16802 retry.go:31] will retry after 3.237901ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.628115   16802 retry.go:31] will retry after 5.781283ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.634330   16802 retry.go:31] will retry after 7.041225ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.641473   16802 retry.go:31] will retry after 16.813623ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.658729   16802 retry.go:31] will retry after 21.00515ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
I0920 21:45:24.680014   16802 retry.go:31] will retry after 28.820505ms: open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/scheduled-stop-753126/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-753126 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-753126 -n scheduled-stop-753126
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-753126
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-753126 --schedule 15s
E0920 21:46:00.962150   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 21:46:32.650940   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-753126
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-753126: exit status 7 (64.582163ms)

-- stdout --
	scheduled-stop-753126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-753126 -n scheduled-stop-753126
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-753126 -n scheduled-stop-753126: exit status 7 (64.242076ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-753126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-753126
--- PASS: TestScheduledStopUnix (122.17s)

TestSkaffold (124.37s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1615229770 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-267453 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-267453 --memory=2600 --driver=kvm2 : (46.281087526s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1615229770 run --minikube-profile skaffold-267453 --kube-context skaffold-267453 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1615229770 run --minikube-profile skaffold-267453 --kube-context skaffold-267453 --status-check=true --port-forward=false --interactive=false: (1m5.479665159s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67cbcb465-mhhhp" [6d73568d-57a7-4f59-acf8-1e22eeb58d46] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004492726s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-67bc66d7b9-2zgbz" [8af9be6d-8d4b-4030-ae15-8fd94c5523c8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004108222s
helpers_test.go:175: Cleaning up "skaffold-267453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-267453
--- PASS: TestSkaffold (124.37s)

TestRunningBinaryUpgrade (187.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.979943562 start -p running-upgrade-721666 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.979943562 start -p running-upgrade-721666 --memory=2200 --vm-driver=kvm2 : (2m29.422522965s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-721666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-721666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (36.557824028s)
helpers_test.go:175: Cleaning up "running-upgrade-721666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-721666
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-721666: (1.187179227s)
--- PASS: TestRunningBinaryUpgrade (187.61s)

TestKubernetesUpgrade (209.9s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m48.223581166s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-778847
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-778847: (3.663274047s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-778847 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-778847 status --format={{.Host}}: exit status 7 (76.05669ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
E0920 21:51:00.962343   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m3.021932805s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-778847 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (111.621532ms)

-- stdout --
	* [kubernetes-upgrade-778847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-778847
	    minikube start -p kubernetes-upgrade-778847 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7788472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-778847 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-778847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (33.586597505s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-778847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-778847
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-778847: (1.132918726s)
--- PASS: TestKubernetesUpgrade (209.90s)

TestPause/serial/Start (91.5s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-745346 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-745346 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m31.498039809s)
--- PASS: TestPause/serial/Start (91.50s)

TestPause/serial/SecondStartNoReconfiguration (60.37s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-745346 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-745346 --alsologtostderr -v=1 --driver=kvm2 : (1m0.338312538s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (60.37s)

TestStoppedBinaryUpgrade/Setup (0.43s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (161.46s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.886129508 start -p stopped-upgrade-543527 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.886129508 start -p stopped-upgrade-543527 --memory=2200 --vm-driver=kvm2 : (1m20.717205872s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.886129508 -p stopped-upgrade-543527 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.886129508 -p stopped-upgrade-543527 stop: (12.690280819s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-543527 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-543527 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m8.052706899s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.46s)

TestPause/serial/Pause (0.58s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-745346 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.58s)

TestPause/serial/VerifyStatus (0.24s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-745346 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-745346 --output=json --layout=cluster: exit status 2 (235.88127ms)

-- stdout --
	{"Name":"pause-745346","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-745346","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

TestPause/serial/Unpause (0.56s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-745346 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.75s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-745346 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (0.84s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-745346 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.84s)

TestPause/serial/VerifyDeletedResources (0.64s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-543527
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-543527: (1.040018875s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (354.239632ms)

-- stdout --
	* [NoKubernetes-418393] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

TestNoKubernetes/serial/StartWithK8s (67.68s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-418393 --driver=kvm2 
E0920 21:54:50.168652   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-418393 --driver=kvm2 : (1m7.361502736s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-418393 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (67.68s)

TestNetworkPlugins/group/auto/Start (110.72s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m50.72330512s)
--- PASS: TestNetworkPlugins/group/auto/Start (110.72s)

TestNoKubernetes/serial/StartWithStopK8s (12.83s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --driver=kvm2 : (11.496311149s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-418393 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-418393 status -o json: exit status 2 (256.621799ms)

-- stdout --
	{"Name":"NoKubernetes-418393","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-418393
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-418393: (1.072057599s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.83s)

TestNetworkPlugins/group/kindnet/Start (76.61s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m16.608660518s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.61s)

TestNoKubernetes/serial/Start (52.47s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --driver=kvm2 
E0920 21:56:00.962242   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:56:12.090540   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:56:15.720262   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:56:32.649512   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-418393 --no-kubernetes --driver=kvm2 : (52.469704768s)
--- PASS: TestNoKubernetes/serial/Start (52.47s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-418393 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-418393 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.649249ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (29.31s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.566981514s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.740190639s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.31s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-358249 "pgrep -a kubelet"
I0920 21:56:42.675267   16802 config.go:182] Loaded profile config "auto-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xfw2g" [a0e311cd-f009-48cc-b807-f43dfdf40f21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xfw2g" [a0e311cd-f009-48cc-b807-f43dfdf40f21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003499884s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bmjl2" [c837feb2-a8ab-41d5-96ea-d3c9e951a2d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005054171s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-358249 "pgrep -a kubelet"
I0920 21:57:07.287835   16802 config.go:182] Loaded profile config "kindnet-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qv596" [dd5d275d-1386-4560-83e8-27775fd13dd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qv596" [dd5d275d-1386-4560-83e8-27775fd13dd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005920794s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)

TestNetworkPlugins/group/calico/Start (86.88s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m26.883755737s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.88s)

TestNoKubernetes/serial/Stop (2.3s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-418393
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-418393: (2.301682703s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

TestNoKubernetes/serial/StartNoArgs (51.12s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-418393 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-418393 --driver=kvm2 : (51.119388061s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.12s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (98.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m38.325478492s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.33s)

TestNetworkPlugins/group/false/Start (141.51s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m21.507114749s)
--- PASS: TestNetworkPlugins/group/false/Start (141.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-418393 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-418393 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.406341ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (142.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0920 21:58:28.229907   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m22.3265481s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (142.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-572zg" [ffb78327-a33f-4c65-a680-7f2df900c22e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005084613s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-358249 "pgrep -a kubelet"
I0920 21:58:42.189264   16802 config.go:182] Loaded profile config "calico-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7f46c" [93885ccb-6f37-4c9a-a3e8-f5cd8dc943ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 21:58:44.464014   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.470373   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.481709   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.503068   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.544459   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.625909   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:44.787478   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:45.109185   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:45.751204   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:58:47.033032   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7f46c" [93885ccb-6f37-4c9a-a3e8-f5cd8dc943ff] Running
E0920 21:58:49.595002   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00527956s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-358249 "pgrep -a kubelet"
I0920 21:59:08.209102   16802 config.go:182] Loaded profile config "custom-flannel-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gxxsb" [81ab6147-a8a6-4944-9bfa-024a92f79ba3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gxxsb" [81ab6147-a8a6-4944-9bfa-024a92f79ba3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006681517s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.34s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m23.340591883s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.36s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m22.36386687s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.36s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-358249 "pgrep -a kubelet"
I0920 21:59:56.767555   16802 config.go:182] Loaded profile config "false-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.29s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nskzn" [e6e963a5-d74c-4a54-acf5-334f548c0f2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nskzn" [e6e963a5-d74c-4a54-acf5-334f548c0f2b] Running
E0920 22:00:06.404282   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.005269032s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (97.77s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-358249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m37.773215882s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (97.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-358249 "pgrep -a kubelet"
I0920 22:00:28.422243   16802 config.go:182] Loaded profile config "enable-default-cni-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-358249 replace --force -f testdata/netcat-deployment.yaml
I0920 22:00:28.924096   16802 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7h7vs" [29608acc-887b-45be-9fbf-6e3fa4132f55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7h7vs" [29608acc-887b-45be-9fbf-6e3fa4132f55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006662141s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hgcn8" [c27a8102-e1e7-4b2f-9a6b-f949977b89f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004530618s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-358249 "pgrep -a kubelet"
I0920 22:00:41.490328   16802 config.go:182] Loaded profile config "flannel-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5xjbz" [ba3da0ee-be51-44f3-b741-65af1b01cdcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:00:44.032732   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5xjbz" [ba3da0ee-be51-44f3-b741-65af1b01cdcb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005570633s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (176.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-178045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-178045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m56.151711349s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (176.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-358249 "pgrep -a kubelet"
I0920 22:00:59.869010   16802 config.go:182] Loaded profile config "bridge-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.31s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jknp6" [7c79cfe6-da17-46e9-85dd-1096216ce887] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:01:00.962594   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jknp6" [7c79cfe6-da17-46e9-85dd-1096216ce887] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.005200909s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (121.35s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-465436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-465436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (2m1.350831954s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (121.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (91.45s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-622739 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0920 22:01:32.649737   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:42.891962   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:42.898364   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:42.909700   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:42.931098   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:42.972559   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:43.053905   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:43.215450   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:43.537223   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:44.179376   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:45.461719   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:48.023477   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:01:53.145112   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.084293   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.090827   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.102204   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.123634   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.165041   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.246483   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.407887   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:01.729512   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:02.371738   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:03.386903   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:02:03.653972   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-622739 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m31.45227763s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.45s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-358249 "pgrep -a kubelet"
I0920 22:02:04.535316   16802 config.go:182] Loaded profile config "kubenet-358249": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-358249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvhjb" [4c1b57f3-12b7-4f1c-b1d3-a7b2a582a295] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:02:06.215315   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mvhjb" [4c1b57f3-12b7-4f1c-b1d3-a7b2a582a295] Running
E0920 22:02:11.336659   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.006656443s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-358249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-358249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E0920 22:09:36.151705   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:09:48.709823   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-264920 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0920 22:02:42.060409   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-264920 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m37.263615675s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-622739 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff6bdf7d-096c-463d-808e-1358cfb54cbb] Pending
helpers_test.go:344: "busybox" [ff6bdf7d-096c-463d-808e-1358cfb54cbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 22:03:04.829920   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ff6bdf7d-096c-463d-808e-1358cfb54cbb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005221189s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-622739 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-622739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-622739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105818716s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-622739 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-622739 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-622739 --alsologtostderr -v=3: (13.329195256s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-465436 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [efcc73c0-71b2-48fa-94d5-f3067e43c335] Pending
helpers_test.go:344: "busybox" [efcc73c0-71b2-48fa-94d5-f3067e43c335] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [efcc73c0-71b2-48fa-94d5-f3067e43c335] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.005835306s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-465436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-465436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-465436 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-465436 --alsologtostderr -v=3
E0920 22:03:23.021700   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-465436 --alsologtostderr -v=3: (13.339097985s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-622739 -n embed-certs-622739
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-622739 -n embed-certs-622739: exit status 7 (62.033235ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-622739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (305.83s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-622739 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0920 22:03:28.230350   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-622739 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m5.495001495s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-622739 -n embed-certs-622739
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (305.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-465436 -n no-preload-465436
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-465436 -n no-preload-465436: exit status 7 (65.343823ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-465436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (319.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-465436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0920 22:03:35.969726   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:35.976261   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:35.987703   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:36.009122   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:36.050520   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:36.132044   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:36.293616   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:36.615393   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:37.257588   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:38.539795   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:41.101229   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:44.463748   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:03:46.222915   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-465436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m19.204945512s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-465436 -n no-preload-465436
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (319.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-178045 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abdf86a6-ae56-419c-9e2f-52ffac68b34c] Pending
helpers_test.go:344: "busybox" [abdf86a6-ae56-419c-9e2f-52ffac68b34c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [abdf86a6-ae56-419c-9e2f-52ffac68b34c] Running
E0920 22:03:56.464776   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005517603s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-178045 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-178045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-178045 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-178045 --alsologtostderr -v=3
E0920 22:04:08.449546   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.455945   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.467363   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.488775   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.530240   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.611674   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:08.773596   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:09.095104   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:09.737418   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-178045 --alsologtostderr -v=3: (13.354066599s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-264920 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [edc71d85-6925-48bb-a4c5-727e4d686652] Pending
E0920 22:04:11.019773   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [edc71d85-6925-48bb-a4c5-727e4d686652] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 22:04:12.168813   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:13.581369   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [edc71d85-6925-48bb-a4c5-727e4d686652] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004297956s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-264920 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-178045 -n old-k8s-version-178045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-178045 -n old-k8s-version-178045: exit status 7 (64.964187ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-178045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (407.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-178045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0920 22:04:16.947629   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:18.703157   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-178045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m47.411535585s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-178045 -n old-k8s-version-178045
E0920 22:11:02.985786   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (407.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-264920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-264920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050761522s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-264920 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-264920 --alsologtostderr -v=3
E0920 22:04:26.751713   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:28.944751   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-264920 --alsologtostderr -v=3: (13.341040267s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920: exit status 7 (65.016438ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-264920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-264920 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0920 22:04:44.943186   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:49.426298   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.035581   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.042030   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.053986   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.075424   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.116838   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.198121   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.360411   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.682232   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:57.909773   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:58.323793   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:04:59.605115   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:02.167151   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:07.288954   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:17.530231   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:28.910368   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:28.916749   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:28.928111   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:28.949472   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:28.990911   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:29.072324   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:29.233976   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:29.555791   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:30.197772   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:30.388306   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:31.479394   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:34.041471   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.283380   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.289765   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.301104   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.322452   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.363863   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.445293   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.606828   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:35.928562   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:36.570622   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:37.852348   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:38.011993   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:39.163125   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:40.413688   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:45.535264   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:49.404532   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:05:55.777231   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.153714   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.160109   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.171544   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.193194   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.234643   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.316147   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.477706   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.800008   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.962684   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/functional-007742/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:01.441961   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:02.724030   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:05.286167   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:09.886740   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:10.407430   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:16.259242   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:18.973408   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:19.831748   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:20.649317   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:32.649971   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/addons-022099/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:41.131027   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:42.891382   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:50.848940   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:52.309921   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:57.221572   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:01.083912   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:04.849898   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:04.856276   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:04.867612   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:04.888968   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:04.930353   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:05.011807   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:05.173374   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:05.495260   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:06.137130   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:07.419217   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:09.980975   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:10.593778   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/auto-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:15.102662   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:22.092616   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:25.344223   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:28.784627   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kindnet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:40.895615   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:07:45.825835   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:08:12.770735   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/enable-default-cni-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:08:19.143894   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:08:26.788093   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/kubenet-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:08:28.230712   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-264920 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m16.443729337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
E0920 22:09:51.294529   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/skaffold-267453/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.72s)
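The test output above is interleaved with many `cert_rotation.go:171` "no such file or directory" errors that differ only in timestamp and profile name. A minimal sketch (assuming GNU grep/sed/sort are available; the function name `summarize_cert_errors` is our own, not part of minikube) for condensing such a run of errors into the distinct profiles they reference, fed here with two sample lines copied from this report:

```shell
# Extract the unique minikube profile names mentioned in
# "cert_rotation ... profiles/<name>/client.crt" error lines on stdin.
summarize_cert_errors() {
  grep -o 'profiles/[^/]*/client.crt' \
    | sed 's|profiles/||; s|/client.crt||' \
    | sort -u
}

summarize_cert_errors <<'EOF'
E0920 22:05:35.283380   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:06:00.153714   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
EOF
```

On the sample input this prints one profile per line, sorted and deduplicated (`bridge-358249`, then `flannel-358249`); piping the full report through it shows the errors all point at client certs of profiles deleted by earlier tests.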

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l8mtc" [f9aa23d5-5c89-471e-98b8-c2042224bb99] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l8mtc" [f9aa23d5-5c89-471e-98b8-c2042224bb99] Running
E0920 22:08:35.970310   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004816299s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l8mtc" [f9aa23d5-5c89-471e-98b8-c2042224bb99] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00527347s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-622739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-622739 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-622739 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-622739 -n embed-certs-622739
E0920 22:08:44.014790   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/bridge-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-622739 -n embed-certs-622739: exit status 2 (256.235036ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-622739 -n embed-certs-622739
E0920 22:08:44.463353   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/gvisor-923979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-622739 -n embed-certs-622739: exit status 2 (239.331867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-622739 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-622739 -n embed-certs-622739
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-622739 -n embed-certs-622739
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.52s)

TestStartStop/group/newest-cni/serial/FirstStart (63.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218802 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218802 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m3.670836341s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.67s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4khcg" [eafb5010-796c-41c0-abcd-b09c2cec2f83] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4khcg" [eafb5010-796c-41c0-abcd-b09c2cec2f83] Running
E0920 22:09:03.673949   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/calico-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00459738s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4khcg" [eafb5010-796c-41c0-abcd-b09c2cec2f83] Running
E0920 22:09:08.449575   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/custom-flannel-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003523649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-465436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-465436 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.58s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-465436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-465436 -n no-preload-465436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-465436 -n no-preload-465436: exit status 2 (236.766258ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-465436 -n no-preload-465436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-465436 -n no-preload-465436: exit status 2 (253.952964ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-465436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-465436 -n no-preload-465436
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-465436 -n no-preload-465436
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k8kk7" [4d530855-76fe-44e8-8d80-50d2d41b1ab6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k8kk7" [4d530855-76fe-44e8-8d80-50d2d41b1ab6] Running
E0920 22:09:57.034678   16802 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9629/.minikube/profiles/false-358249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004352122s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/newest-cni/serial/Stop (13.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-218802 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-218802 --alsologtostderr -v=3: (13.36016262s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.36s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k8kk7" [4d530855-76fe-44e8-8d80-50d2d41b1ab6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004957679s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-264920 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-264920 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-264920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920: exit status 2 (237.522035ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920: exit status 2 (237.596928ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-264920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-264920 -n default-k8s-diff-port-264920
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218802 -n newest-cni-218802
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218802 -n newest-cni-218802: exit status 7 (66.352249ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-218802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (37.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218802 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218802 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (37.603231548s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218802 -n newest-cni-218802
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218802 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-218802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218802 -n newest-cni-218802
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218802 -n newest-cni-218802: exit status 2 (228.062039ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218802 -n newest-cni-218802
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218802 -n newest-cni-218802: exit status 2 (255.438903ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-218802 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218802 -n newest-cni-218802
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218802 -n newest-cni-218802
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zgrb9" [1012b908-3395-49e7-8e7c-5bbf1ee46bf5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004311765s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zgrb9" [1012b908-3395-49e7-8e7c-5bbf1ee46bf5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004216681s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-178045 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-178045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-178045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-178045 -n old-k8s-version-178045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-178045 -n old-k8s-version-178045: exit status 2 (231.567802ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-178045 -n old-k8s-version-178045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-178045 -n old-k8s-version-178045: exit status 2 (234.047887ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-178045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-178045 -n old-k8s-version-178045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-178045 -n old-k8s-version-178045
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.27s)

Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-358249 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-358249

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-358249

>>> host: /etc/nsswitch.conf:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/hosts:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/resolv.conf:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-358249

>>> host: crictl pods:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: crictl containers:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> k8s: describe netcat deployment:
error: context "cilium-358249" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-358249" does not exist

>>> k8s: netcat logs:
error: context "cilium-358249" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-358249" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-358249" does not exist

>>> k8s: coredns logs:
error: context "cilium-358249" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-358249" does not exist

>>> k8s: api server logs:
error: context "cilium-358249" does not exist

>>> host: /etc/cni:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: ip a s:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: ip r s:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: iptables-save:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: iptables table nat:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-358249

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-358249

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-358249" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-358249" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-358249

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-358249

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-358249" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-358249" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-358249" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-358249" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-358249" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: kubelet daemon config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> k8s: kubelet logs:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-358249

>>> host: docker daemon status:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: docker daemon config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: docker system info:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: cri-docker daemon status:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: cri-docker daemon config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: cri-dockerd version:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: containerd daemon status:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: containerd daemon config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: containerd config dump:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: crio daemon status:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: crio daemon config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: /etc/crio:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

>>> host: crio config:
* Profile "cilium-358249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-358249"

----------------------- debugLogs end: cilium-358249 [took: 3.25456207s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-358249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-358249
--- SKIP: TestNetworkPlugins/group/cilium (3.39s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-521947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-521947
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)