Test Report: KVM_Linux_containerd 17735

92ccbd1049dad7c606832f9da24cf8bb40191acf:2024-03-27:33769

Tests failed (1/333)

Order  Failed test                       Duration
46     TestAddons/parallel/CloudSpanner  8.02s
TestAddons/parallel/CloudSpanner (8.02s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-s4r7l" [083a9976-d750-4e6b-9856-705ad08ea6e7] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009458081s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-336680
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-336680: exit status 11 (397.642185ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T20:01:27Z" level=error msg="stat /run/containerd/runc/k8s.io/a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 addons disable cloud-spanner -p addons-336680" : exit status 11
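The stderr above shows the root cause: `minikube addons disable` shells out to `sudo runc --root /run/containerd/runc/k8s.io list -f json` to check for paused containers, and runc exits with status 1 because a container's state directory (`a5e00b…dbee`) was removed between enumeration and `stat` — a cleanup race, not a genuine failure of the addon. A minimal shell sketch of a more tolerant check (the function name and the empty-list fallback are illustrative assumptions, not minikube's actual implementation):

```shell
# Hypothetical guard around the failing "list paused" step: if the runc
# state root has already been cleaned up, report an empty container list
# instead of propagating runc's non-zero exit status.
list_paused_containers() {
    root="$1"   # e.g. /run/containerd/runc/k8s.io
    if [ ! -d "$root" ]; then
        echo "[]"   # no state root => nothing can be paused
        return 0
    fi
    runc --root "$root" list -f json
}
```

Note this only covers a missing state root; in the failing run the root existed but an individual container's state file vanished mid-listing, so a real fix would also need to tolerate per-container `no such file or directory` errors.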
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-336680 -n addons-336680
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-336680 logs -n 25: (1.741907189s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-952559 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-952559                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                                         |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-952559                                                                     | download-only-952559 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-449432                                                                     | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-819311                                                                     | download-only-819311 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-952559                                                                     | download-only-952559 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-799416 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | binary-mirror-799416                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:42681                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-799416                                                                     | binary-mirror-799416 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | addons-336680                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | addons-336680                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-336680 --wait=true                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 20:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-336680 ssh cat                                                                       | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:00 UTC | 27 Mar 24 20:00 UTC |
	|         | /opt/local-path-provisioner/pvc-3cfedee6-16e7-4404-b203-2e35fc3cfb1a_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-336680 addons disable                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:00 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-336680 ip                                                                            | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:00 UTC | 27 Mar 24 20:00 UTC |
	| addons  | addons-336680 addons disable                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:00 UTC | 27 Mar 24 20:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-336680 addons                                                                        | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-336680 addons disable                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | addons-336680                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-336680 ssh curl -s                                                                   | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ip      | addons-336680 ip                                                                            | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	| addons  | addons-336680 addons disable                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-336680 addons disable                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | -p addons-336680                                                                            |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC | 27 Mar 24 20:01 UTC |
	|         | -p addons-336680                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-336680        | jenkins | v1.33.0-beta.0 | 27 Mar 24 20:01 UTC |                     |
	|         | addons-336680                                                                               |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:58:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:58:24.447032  440685 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:58:24.447143  440685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:24.447153  440685 out.go:304] Setting ErrFile to fd 2...
	I0327 19:58:24.447157  440685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:24.447336  440685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 19:58:24.447981  440685 out.go:298] Setting JSON to false
	I0327 19:58:24.448835  440685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13257,"bootTime":1711556248,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:58:24.448906  440685 start.go:139] virtualization: kvm guest
	I0327 19:58:24.451023  440685 out.go:177] * [addons-336680] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:58:24.452324  440685 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 19:58:24.453733  440685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:58:24.452368  440685 notify.go:220] Checking for updates...
	I0327 19:58:24.455189  440685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 19:58:24.456414  440685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:24.457638  440685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 19:58:24.458842  440685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:58:24.460172  440685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:58:24.491174  440685 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 19:58:24.492591  440685 start.go:297] selected driver: kvm2
	I0327 19:58:24.492604  440685 start.go:901] validating driver "kvm2" against <nil>
	I0327 19:58:24.492615  440685 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:58:24.493261  440685 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:24.493323  440685 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17735-432634/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 19:58:24.507845  440685 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 19:58:24.507896  440685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 19:58:24.508122  440685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:58:24.508182  440685 cni.go:84] Creating CNI manager for ""
	I0327 19:58:24.508195  440685 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 19:58:24.508204  440685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 19:58:24.508252  440685 start.go:340] cluster config:
	{Name:addons-336680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-336680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:58:24.508353  440685 iso.go:125] acquiring lock: {Name:mk6bbc35a3ce9b9a38f627b62192ef8de7c8520d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:24.510254  440685 out.go:177] * Starting "addons-336680" primary control-plane node in "addons-336680" cluster
	I0327 19:58:24.511505  440685 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 19:58:24.511543  440685 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0327 19:58:24.511563  440685 cache.go:56] Caching tarball of preloaded images
	I0327 19:58:24.511659  440685 preload.go:173] Found /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0327 19:58:24.511671  440685 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 19:58:24.512042  440685 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/config.json ...
	I0327 19:58:24.512069  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/config.json: {Name:mk4de75e21eded41ae63b1a6f7d2e67d0063ba76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:24.512252  440685 start.go:360] acquireMachinesLock for addons-336680: {Name:mkb770ea7f020bc44e80b2f3aef63a41395a3311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 19:58:24.512318  440685 start.go:364] duration metric: took 48.536µs to acquireMachinesLock for "addons-336680"
	I0327 19:58:24.512340  440685 start.go:93] Provisioning new machine with config: &{Name:addons-336680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-336680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 19:58:24.512416  440685 start.go:125] createHost starting for "" (driver="kvm2")
	I0327 19:58:24.514111  440685 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 19:58:24.514256  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:58:24.514316  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:58:24.528881  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0327 19:58:24.529440  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:58:24.530140  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:58:24.530167  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:58:24.530566  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:58:24.530795  440685 main.go:141] libmachine: (addons-336680) Calling .GetMachineName
	I0327 19:58:24.530945  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:24.531084  440685 start.go:159] libmachine.API.Create for "addons-336680" (driver="kvm2")
	I0327 19:58:24.531114  440685 client.go:168] LocalClient.Create starting
	I0327 19:58:24.531185  440685 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem
	I0327 19:58:24.703666  440685 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/cert.pem
	I0327 19:58:24.855385  440685 main.go:141] libmachine: Running pre-create checks...
	I0327 19:58:24.855413  440685 main.go:141] libmachine: (addons-336680) Calling .PreCreateCheck
	I0327 19:58:24.856002  440685 main.go:141] libmachine: (addons-336680) Calling .GetConfigRaw
	I0327 19:58:24.856540  440685 main.go:141] libmachine: Creating machine...
	I0327 19:58:24.856557  440685 main.go:141] libmachine: (addons-336680) Calling .Create
	I0327 19:58:24.856730  440685 main.go:141] libmachine: (addons-336680) Creating KVM machine...
	I0327 19:58:24.858112  440685 main.go:141] libmachine: (addons-336680) DBG | found existing default KVM network
	I0327 19:58:24.858957  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:24.858788  440707 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0327 19:58:24.859051  440685 main.go:141] libmachine: (addons-336680) DBG | created network xml: 
	I0327 19:58:24.859080  440685 main.go:141] libmachine: (addons-336680) DBG | <network>
	I0327 19:58:24.859091  440685 main.go:141] libmachine: (addons-336680) DBG |   <name>mk-addons-336680</name>
	I0327 19:58:24.859104  440685 main.go:141] libmachine: (addons-336680) DBG |   <dns enable='no'/>
	I0327 19:58:24.859114  440685 main.go:141] libmachine: (addons-336680) DBG |   
	I0327 19:58:24.859127  440685 main.go:141] libmachine: (addons-336680) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0327 19:58:24.859139  440685 main.go:141] libmachine: (addons-336680) DBG |     <dhcp>
	I0327 19:58:24.859152  440685 main.go:141] libmachine: (addons-336680) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0327 19:58:24.859161  440685 main.go:141] libmachine: (addons-336680) DBG |     </dhcp>
	I0327 19:58:24.859174  440685 main.go:141] libmachine: (addons-336680) DBG |   </ip>
	I0327 19:58:24.859187  440685 main.go:141] libmachine: (addons-336680) DBG |   
	I0327 19:58:24.859198  440685 main.go:141] libmachine: (addons-336680) DBG | </network>
	I0327 19:58:24.859212  440685 main.go:141] libmachine: (addons-336680) DBG | 
	I0327 19:58:24.864598  440685 main.go:141] libmachine: (addons-336680) DBG | trying to create private KVM network mk-addons-336680 192.168.39.0/24...
	I0327 19:58:24.930589  440685 main.go:141] libmachine: (addons-336680) DBG | private KVM network mk-addons-336680 192.168.39.0/24 created
	I0327 19:58:24.930622  440685 main.go:141] libmachine: (addons-336680) Setting up store path in /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680 ...
	I0327 19:58:24.930646  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:24.930563  440707 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:24.930663  440685 main.go:141] libmachine: (addons-336680) Building disk image from file:///home/jenkins/minikube-integration/17735-432634/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso
	I0327 19:58:24.930786  440685 main.go:141] libmachine: (addons-336680) Downloading /home/jenkins/minikube-integration/17735-432634/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17735-432634/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso...
	I0327 19:58:25.191768  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:25.191637  440707 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa...
	I0327 19:58:25.296021  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:25.295843  440707 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/addons-336680.rawdisk...
	I0327 19:58:25.296062  440685 main.go:141] libmachine: (addons-336680) DBG | Writing magic tar header
	I0327 19:58:25.296105  440685 main.go:141] libmachine: (addons-336680) DBG | Writing SSH key tar header
	I0327 19:58:25.296126  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:25.296077  440707 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680 ...
	I0327 19:58:25.296239  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680
	I0327 19:58:25.296279  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17735-432634/.minikube/machines
	I0327 19:58:25.296298  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680 (perms=drwx------)
	I0327 19:58:25.296313  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins/minikube-integration/17735-432634/.minikube/machines (perms=drwxr-xr-x)
	I0327 19:58:25.296338  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins/minikube-integration/17735-432634/.minikube (perms=drwxr-xr-x)
	I0327 19:58:25.296351  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:25.296366  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17735-432634
	I0327 19:58:25.296378  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 19:58:25.296392  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins/minikube-integration/17735-432634 (perms=drwxrwxr-x)
	I0327 19:58:25.296412  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home/jenkins
	I0327 19:58:25.296424  440685 main.go:141] libmachine: (addons-336680) DBG | Checking permissions on dir: /home
	I0327 19:58:25.296437  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 19:58:25.296450  440685 main.go:141] libmachine: (addons-336680) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 19:58:25.296459  440685 main.go:141] libmachine: (addons-336680) Creating domain...
	I0327 19:58:25.296475  440685 main.go:141] libmachine: (addons-336680) DBG | Skipping /home - not owner
	I0327 19:58:25.297836  440685 main.go:141] libmachine: (addons-336680) define libvirt domain using xml: 
	I0327 19:58:25.297866  440685 main.go:141] libmachine: (addons-336680) <domain type='kvm'>
	I0327 19:58:25.297875  440685 main.go:141] libmachine: (addons-336680)   <name>addons-336680</name>
	I0327 19:58:25.297896  440685 main.go:141] libmachine: (addons-336680)   <memory unit='MiB'>4000</memory>
	I0327 19:58:25.297913  440685 main.go:141] libmachine: (addons-336680)   <vcpu>2</vcpu>
	I0327 19:58:25.297919  440685 main.go:141] libmachine: (addons-336680)   <features>
	I0327 19:58:25.297925  440685 main.go:141] libmachine: (addons-336680)     <acpi/>
	I0327 19:58:25.297933  440685 main.go:141] libmachine: (addons-336680)     <apic/>
	I0327 19:58:25.297941  440685 main.go:141] libmachine: (addons-336680)     <pae/>
	I0327 19:58:25.297966  440685 main.go:141] libmachine: (addons-336680)     
	I0327 19:58:25.297980  440685 main.go:141] libmachine: (addons-336680)   </features>
	I0327 19:58:25.297988  440685 main.go:141] libmachine: (addons-336680)   <cpu mode='host-passthrough'>
	I0327 19:58:25.297996  440685 main.go:141] libmachine: (addons-336680)   
	I0327 19:58:25.298021  440685 main.go:141] libmachine: (addons-336680)   </cpu>
	I0327 19:58:25.298032  440685 main.go:141] libmachine: (addons-336680)   <os>
	I0327 19:58:25.298041  440685 main.go:141] libmachine: (addons-336680)     <type>hvm</type>
	I0327 19:58:25.298050  440685 main.go:141] libmachine: (addons-336680)     <boot dev='cdrom'/>
	I0327 19:58:25.298064  440685 main.go:141] libmachine: (addons-336680)     <boot dev='hd'/>
	I0327 19:58:25.298075  440685 main.go:141] libmachine: (addons-336680)     <bootmenu enable='no'/>
	I0327 19:58:25.298084  440685 main.go:141] libmachine: (addons-336680)   </os>
	I0327 19:58:25.298091  440685 main.go:141] libmachine: (addons-336680)   <devices>
	I0327 19:58:25.298101  440685 main.go:141] libmachine: (addons-336680)     <disk type='file' device='cdrom'>
	I0327 19:58:25.298126  440685 main.go:141] libmachine: (addons-336680)       <source file='/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/boot2docker.iso'/>
	I0327 19:58:25.298136  440685 main.go:141] libmachine: (addons-336680)       <target dev='hdc' bus='scsi'/>
	I0327 19:58:25.298169  440685 main.go:141] libmachine: (addons-336680)       <readonly/>
	I0327 19:58:25.298191  440685 main.go:141] libmachine: (addons-336680)     </disk>
	I0327 19:58:25.298203  440685 main.go:141] libmachine: (addons-336680)     <disk type='file' device='disk'>
	I0327 19:58:25.298215  440685 main.go:141] libmachine: (addons-336680)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 19:58:25.298234  440685 main.go:141] libmachine: (addons-336680)       <source file='/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/addons-336680.rawdisk'/>
	I0327 19:58:25.298252  440685 main.go:141] libmachine: (addons-336680)       <target dev='hda' bus='virtio'/>
	I0327 19:58:25.298265  440685 main.go:141] libmachine: (addons-336680)     </disk>
	I0327 19:58:25.298277  440685 main.go:141] libmachine: (addons-336680)     <interface type='network'>
	I0327 19:58:25.298291  440685 main.go:141] libmachine: (addons-336680)       <source network='mk-addons-336680'/>
	I0327 19:58:25.298303  440685 main.go:141] libmachine: (addons-336680)       <model type='virtio'/>
	I0327 19:58:25.298312  440685 main.go:141] libmachine: (addons-336680)     </interface>
	I0327 19:58:25.298340  440685 main.go:141] libmachine: (addons-336680)     <interface type='network'>
	I0327 19:58:25.298365  440685 main.go:141] libmachine: (addons-336680)       <source network='default'/>
	I0327 19:58:25.298375  440685 main.go:141] libmachine: (addons-336680)       <model type='virtio'/>
	I0327 19:58:25.298382  440685 main.go:141] libmachine: (addons-336680)     </interface>
	I0327 19:58:25.298390  440685 main.go:141] libmachine: (addons-336680)     <serial type='pty'>
	I0327 19:58:25.298396  440685 main.go:141] libmachine: (addons-336680)       <target port='0'/>
	I0327 19:58:25.298404  440685 main.go:141] libmachine: (addons-336680)     </serial>
	I0327 19:58:25.298409  440685 main.go:141] libmachine: (addons-336680)     <console type='pty'>
	I0327 19:58:25.298418  440685 main.go:141] libmachine: (addons-336680)       <target type='serial' port='0'/>
	I0327 19:58:25.298429  440685 main.go:141] libmachine: (addons-336680)     </console>
	I0327 19:58:25.298457  440685 main.go:141] libmachine: (addons-336680)     <rng model='virtio'>
	I0327 19:58:25.298482  440685 main.go:141] libmachine: (addons-336680)       <backend model='random'>/dev/random</backend>
	I0327 19:58:25.298490  440685 main.go:141] libmachine: (addons-336680)     </rng>
	I0327 19:58:25.298500  440685 main.go:141] libmachine: (addons-336680)     
	I0327 19:58:25.298509  440685 main.go:141] libmachine: (addons-336680)     
	I0327 19:58:25.298519  440685 main.go:141] libmachine: (addons-336680)   </devices>
	I0327 19:58:25.298528  440685 main.go:141] libmachine: (addons-336680) </domain>
	I0327 19:58:25.298538  440685 main.go:141] libmachine: (addons-336680) 
	I0327 19:58:25.303058  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:c8:5a:b3 in network default
	I0327 19:58:25.303715  440685 main.go:141] libmachine: (addons-336680) Ensuring networks are active...
	I0327 19:58:25.303741  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:25.304412  440685 main.go:141] libmachine: (addons-336680) Ensuring network default is active
	I0327 19:58:25.304908  440685 main.go:141] libmachine: (addons-336680) Ensuring network mk-addons-336680 is active
	I0327 19:58:25.305564  440685 main.go:141] libmachine: (addons-336680) Getting domain xml...
	I0327 19:58:25.306665  440685 main.go:141] libmachine: (addons-336680) Creating domain...
	I0327 19:58:25.630680  440685 main.go:141] libmachine: (addons-336680) Waiting to get IP...
	I0327 19:58:25.631740  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:25.632282  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:25.632311  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:25.632244  440707 retry.go:31] will retry after 244.133376ms: waiting for machine to come up
	I0327 19:58:25.877903  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:25.878372  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:25.878407  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:25.878325  440707 retry.go:31] will retry after 324.461083ms: waiting for machine to come up
	I0327 19:58:26.204914  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:26.205328  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:26.205356  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:26.205298  440707 retry.go:31] will retry after 392.665703ms: waiting for machine to come up
	I0327 19:58:26.599984  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:26.600507  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:26.600535  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:26.600456  440707 retry.go:31] will retry after 393.162839ms: waiting for machine to come up
	I0327 19:58:26.995296  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:26.995825  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:26.995856  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:26.995758  440707 retry.go:31] will retry after 468.613069ms: waiting for machine to come up
	I0327 19:58:27.466491  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:27.466979  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:27.467010  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:27.466927  440707 retry.go:31] will retry after 830.596222ms: waiting for machine to come up
	I0327 19:58:28.299158  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:28.299571  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:28.299623  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:28.299517  440707 retry.go:31] will retry after 741.146736ms: waiting for machine to come up
	I0327 19:58:29.042447  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:29.042902  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:29.042934  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:29.042821  440707 retry.go:31] will retry after 1.097090951s: waiting for machine to come up
	I0327 19:58:30.141818  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:30.142286  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:30.142366  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:30.142260  440707 retry.go:31] will retry after 1.653335562s: waiting for machine to come up
	I0327 19:58:31.798390  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:31.798882  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:31.798917  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:31.798812  440707 retry.go:31] will retry after 1.563845982s: waiting for machine to come up
	I0327 19:58:33.364812  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:33.365210  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:33.365240  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:33.365156  440707 retry.go:31] will retry after 2.841472372s: waiting for machine to come up
	I0327 19:58:36.209001  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:36.209451  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:36.209484  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:36.209392  440707 retry.go:31] will retry after 3.31778855s: waiting for machine to come up
	I0327 19:58:39.530960  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:39.531284  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:39.531311  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:39.531254  440707 retry.go:31] will retry after 4.147497832s: waiting for machine to come up
	I0327 19:58:43.685577  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:43.686053  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find current IP address of domain addons-336680 in network mk-addons-336680
	I0327 19:58:43.686087  440685 main.go:141] libmachine: (addons-336680) DBG | I0327 19:58:43.685997  440707 retry.go:31] will retry after 3.999127126s: waiting for machine to come up
	I0327 19:58:47.688996  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.689439  440685 main.go:141] libmachine: (addons-336680) Found IP for machine: 192.168.39.8
	I0327 19:58:47.689470  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has current primary IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.689477  440685 main.go:141] libmachine: (addons-336680) Reserving static IP address...
	I0327 19:58:47.689903  440685 main.go:141] libmachine: (addons-336680) DBG | unable to find host DHCP lease matching {name: "addons-336680", mac: "52:54:00:f6:b3:dd", ip: "192.168.39.8"} in network mk-addons-336680
	I0327 19:58:47.761650  440685 main.go:141] libmachine: (addons-336680) DBG | Getting to WaitForSSH function...
	I0327 19:58:47.761688  440685 main.go:141] libmachine: (addons-336680) Reserved static IP address: 192.168.39.8
	I0327 19:58:47.761703  440685 main.go:141] libmachine: (addons-336680) Waiting for SSH to be available...
	I0327 19:58:47.764575  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.764987  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:47.765019  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.765141  440685 main.go:141] libmachine: (addons-336680) DBG | Using SSH client type: external
	I0327 19:58:47.765176  440685 main.go:141] libmachine: (addons-336680) DBG | Using SSH private key: /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa (-rw-------)
	I0327 19:58:47.765257  440685 main.go:141] libmachine: (addons-336680) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 19:58:47.765284  440685 main.go:141] libmachine: (addons-336680) DBG | About to run SSH command:
	I0327 19:58:47.765300  440685 main.go:141] libmachine: (addons-336680) DBG | exit 0
	I0327 19:58:47.891996  440685 main.go:141] libmachine: (addons-336680) DBG | SSH cmd err, output: <nil>: 
	I0327 19:58:47.892323  440685 main.go:141] libmachine: (addons-336680) KVM machine creation complete!
	I0327 19:58:47.892710  440685 main.go:141] libmachine: (addons-336680) Calling .GetConfigRaw
	I0327 19:58:47.893459  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:47.893648  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:47.893811  440685 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 19:58:47.893828  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:58:47.895239  440685 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 19:58:47.895290  440685 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 19:58:47.895297  440685 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 19:58:47.895304  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:47.897781  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.898204  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:47.898235  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:47.898353  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:47.898528  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:47.898671  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:47.898782  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:47.898965  440685 main.go:141] libmachine: Using SSH client type: native
	I0327 19:58:47.899173  440685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0327 19:58:47.899184  440685 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 19:58:48.003541  440685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 19:58:48.003568  440685 main.go:141] libmachine: Detecting the provisioner...
	I0327 19:58:48.003606  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.006816  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.007258  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.007283  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.007492  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.007728  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.007912  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.008050  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.008196  440685 main.go:141] libmachine: Using SSH client type: native
	I0327 19:58:48.008394  440685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0327 19:58:48.008407  440685 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 19:58:48.117198  440685 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 19:58:48.117276  440685 main.go:141] libmachine: found compatible host: buildroot
	I0327 19:58:48.117285  440685 main.go:141] libmachine: Provisioning with buildroot...
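Provisioner detection above runs `cat /etc/os-release` over SSH and matches on the `ID` field (`buildroot` here). A minimal sketch of parsing that key=value format — hypothetical helper, not the actual provision code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease parses /etc/os-release-style output, like the
// "NAME=Buildroot ... ID=buildroot" block captured in the log,
// stripping optional surrounding quotes from values.
func parseOSRelease(s string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	out := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	info := parseOSRelease(out)
	fmt.Println(info["ID"], info["PRETTY_NAME"])
}
```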
	I0327 19:58:48.117293  440685 main.go:141] libmachine: (addons-336680) Calling .GetMachineName
	I0327 19:58:48.117691  440685 buildroot.go:166] provisioning hostname "addons-336680"
	I0327 19:58:48.117736  440685 main.go:141] libmachine: (addons-336680) Calling .GetMachineName
	I0327 19:58:48.117985  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.120874  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.121279  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.121309  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.121556  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.121737  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.121880  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.122125  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.122347  440685 main.go:141] libmachine: Using SSH client type: native
	I0327 19:58:48.122602  440685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0327 19:58:48.122620  440685 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-336680 && echo "addons-336680" | sudo tee /etc/hostname
	I0327 19:58:48.244019  440685 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-336680
	
	I0327 19:58:48.244062  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.247028  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.247306  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.247331  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.247522  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.247715  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.247995  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.248177  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.248436  440685 main.go:141] libmachine: Using SSH client type: native
	I0327 19:58:48.248663  440685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0327 19:58:48.248692  440685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-336680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-336680/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-336680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 19:58:48.366616  440685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 19:58:48.366649  440685 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17735-432634/.minikube CaCertPath:/home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17735-432634/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17735-432634/.minikube}
	I0327 19:58:48.366667  440685 buildroot.go:174] setting up certificates
	I0327 19:58:48.366681  440685 provision.go:84] configureAuth start
	I0327 19:58:48.366692  440685 main.go:141] libmachine: (addons-336680) Calling .GetMachineName
	I0327 19:58:48.367067  440685 main.go:141] libmachine: (addons-336680) Calling .GetIP
	I0327 19:58:48.370050  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.370506  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.370534  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.370669  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.373712  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.374128  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.374161  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.374293  440685 provision.go:143] copyHostCerts
	I0327 19:58:48.374436  440685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17735-432634/.minikube/ca.pem (1082 bytes)
	I0327 19:58:48.374581  440685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17735-432634/.minikube/cert.pem (1123 bytes)
	I0327 19:58:48.374658  440685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17735-432634/.minikube/key.pem (1675 bytes)
	I0327 19:58:48.374726  440685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17735-432634/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca-key.pem org=jenkins.addons-336680 san=[127.0.0.1 192.168.39.8 addons-336680 localhost minikube]
	I0327 19:58:48.673140  440685 provision.go:177] copyRemoteCerts
	I0327 19:58:48.673228  440685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 19:58:48.673255  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.676525  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.677045  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.677111  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.677424  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.677895  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.678297  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.678639  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:58:48.763344  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 19:58:48.792556  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 19:58:48.821469  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 19:58:48.849157  440685 provision.go:87] duration metric: took 482.456163ms to configureAuth
	I0327 19:58:48.849191  440685 buildroot.go:189] setting minikube options for container-runtime
	I0327 19:58:48.849416  440685 config.go:182] Loaded profile config "addons-336680": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 19:58:48.849449  440685 main.go:141] libmachine: Checking connection to Docker...
	I0327 19:58:48.849462  440685 main.go:141] libmachine: (addons-336680) Calling .GetURL
	I0327 19:58:48.850926  440685 main.go:141] libmachine: (addons-336680) DBG | Using libvirt version 6000000
	I0327 19:58:48.853410  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.853831  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.853863  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.854042  440685 main.go:141] libmachine: Docker is up and running!
	I0327 19:58:48.854064  440685 main.go:141] libmachine: Reticulating splines...
	I0327 19:58:48.854074  440685 client.go:171] duration metric: took 24.322949176s to LocalClient.Create
	I0327 19:58:48.854107  440685 start.go:167] duration metric: took 24.323022826s to libmachine.API.Create "addons-336680"
	I0327 19:58:48.854123  440685 start.go:293] postStartSetup for "addons-336680" (driver="kvm2")
	I0327 19:58:48.854139  440685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 19:58:48.854165  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:48.854424  440685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 19:58:48.854453  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.857004  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.857330  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.857354  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.857529  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.857739  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.857912  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.858082  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:58:48.943977  440685 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 19:58:48.949839  440685 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 19:58:48.949889  440685 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-432634/.minikube/addons for local assets ...
	I0327 19:58:48.949963  440685 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-432634/.minikube/files for local assets ...
	I0327 19:58:48.949987  440685 start.go:296] duration metric: took 95.856073ms for postStartSetup
	I0327 19:58:48.950022  440685 main.go:141] libmachine: (addons-336680) Calling .GetConfigRaw
	I0327 19:58:48.951727  440685 main.go:141] libmachine: (addons-336680) Calling .GetIP
	I0327 19:58:48.954381  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.954774  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.954811  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.955129  440685 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/config.json ...
	I0327 19:58:48.963753  440685 start.go:128] duration metric: took 24.451318235s to createHost
	I0327 19:58:48.963806  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:48.966536  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.966855  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:48.966900  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:48.967145  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:48.967435  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.967741  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:48.967946  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:48.968152  440685 main.go:141] libmachine: Using SSH client type: native
	I0327 19:58:48.968354  440685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0327 19:58:48.968366  440685 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 19:58:49.073486  440685 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711569529.056827043
	
	I0327 19:58:49.073515  440685 fix.go:216] guest clock: 1711569529.056827043
	I0327 19:58:49.073523  440685 fix.go:229] Guest: 2024-03-27 19:58:49.056827043 +0000 UTC Remote: 2024-03-27 19:58:48.963777931 +0000 UTC m=+24.560993415 (delta=93.049112ms)
	I0327 19:58:49.073545  440685 fix.go:200] guest clock delta is within tolerance: 93.049112ms
	I0327 19:58:49.073550  440685 start.go:83] releasing machines lock for "addons-336680", held for 24.561219688s
	I0327 19:58:49.073580  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:49.073973  440685 main.go:141] libmachine: (addons-336680) Calling .GetIP
	I0327 19:58:49.076855  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.077284  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:49.077313  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.077484  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:49.078071  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:49.078259  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:58:49.078333  440685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 19:58:49.078385  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:49.078522  440685 ssh_runner.go:195] Run: cat /version.json
	I0327 19:58:49.078547  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:58:49.080792  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.081096  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:49.081123  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.081142  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.081226  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:49.081403  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:49.081579  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:49.081628  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:49.081671  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:49.081767  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:58:49.081892  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:58:49.082063  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:58:49.082231  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:58:49.082364  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:58:49.161241  440685 ssh_runner.go:195] Run: systemctl --version
	I0327 19:58:49.183055  440685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 19:58:49.190269  440685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 19:58:49.190358  440685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:58:49.209200  440685 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 19:58:49.209235  440685 start.go:494] detecting cgroup driver to use...
	I0327 19:58:49.209343  440685 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 19:58:49.508126  440685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 19:58:49.579528  440685 docker.go:217] disabling cri-docker service (if available) ...
	I0327 19:58:49.579635  440685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 19:58:49.597085  440685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 19:58:49.615105  440685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 19:58:49.743174  440685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 19:58:49.908139  440685 docker.go:233] disabling docker service ...
	I0327 19:58:49.908221  440685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 19:58:49.925282  440685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 19:58:49.940994  440685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 19:58:50.084503  440685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 19:58:50.215755  440685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 19:58:50.232159  440685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:58:50.253837  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 19:58:50.266488  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 19:58:50.279274  440685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 19:58:50.279358  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 19:58:50.291800  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 19:58:50.304735  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 19:58:50.317518  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 19:58:50.330676  440685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 19:58:50.343407  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 19:58:50.356384  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 19:58:50.369733  440685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 19:58:50.382769  440685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 19:58:50.394212  440685 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 19:58:50.394297  440685 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 19:58:50.410706  440685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 19:58:50.422602  440685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:58:50.551861  440685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 19:58:50.585466  440685 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 19:58:50.585569  440685 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 19:58:50.591300  440685 retry.go:31] will retry after 1.12356568s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0327 19:58:51.715722  440685 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 19:58:51.722038  440685 start.go:562] Will wait 60s for crictl version
	I0327 19:58:51.722145  440685 ssh_runner.go:195] Run: which crictl
	I0327 19:58:51.726826  440685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 19:58:51.764058  440685 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0327 19:58:51.764149  440685 ssh_runner.go:195] Run: containerd --version
	I0327 19:58:51.794551  440685 ssh_runner.go:195] Run: containerd --version
	I0327 19:58:51.829676  440685 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.14 ...
	I0327 19:58:51.831380  440685 main.go:141] libmachine: (addons-336680) Calling .GetIP
	I0327 19:58:51.834354  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:51.834723  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:58:51.834764  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:58:51.834973  440685 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 19:58:51.842108  440685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:58:51.856588  440685 kubeadm.go:877] updating cluster {Name:addons-336680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-336680 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 19:58:51.856708  440685 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 19:58:51.856759  440685 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 19:58:51.894881  440685 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0327 19:58:51.894980  440685 ssh_runner.go:195] Run: which lz4
	I0327 19:58:51.899565  440685 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0327 19:58:51.904392  440685 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 19:58:51.904427  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0327 19:58:53.527552  440685 containerd.go:563] duration metric: took 1.628004029s to copy over tarball
	I0327 19:58:53.527654  440685 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 19:58:56.221107  440685 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693420178s)
	I0327 19:58:56.221139  440685 containerd.go:570] duration metric: took 2.693540717s to extract the tarball
	I0327 19:58:56.221148  440685 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 19:58:56.263083  440685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:58:56.379414  440685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 19:58:56.407936  440685 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 19:58:56.453241  440685 retry.go:31] will retry after 126.504183ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T19:58:56Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0327 19:58:56.580665  440685 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 19:58:56.625607  440685 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 19:58:56.625640  440685 cache_images.go:84] Images are preloaded, skipping loading
	I0327 19:58:56.625654  440685 kubeadm.go:928] updating node { 192.168.39.8 8443 v1.29.3 containerd true true} ...
	I0327 19:58:56.625795  440685 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-336680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-336680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 19:58:56.625876  440685 ssh_runner.go:195] Run: sudo crictl info
	I0327 19:58:56.664338  440685 cni.go:84] Creating CNI manager for ""
	I0327 19:58:56.664360  440685 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 19:58:56.664374  440685 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 19:58:56.664408  440685 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-336680 NodeName:addons-336680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 19:58:56.664635  440685 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-336680"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 19:58:56.664760  440685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 19:58:56.677117  440685 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 19:58:56.677205  440685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 19:58:56.689041  440685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0327 19:58:56.708300  440685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 19:58:56.727692  440685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0327 19:58:56.747954  440685 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0327 19:58:56.752373  440685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:58:56.767301  440685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:58:56.889562  440685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:58:56.912486  440685 certs.go:68] Setting up /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680 for IP: 192.168.39.8
	I0327 19:58:56.912518  440685 certs.go:194] generating shared ca certs ...
	I0327 19:58:56.912536  440685 certs.go:226] acquiring lock for ca certs: {Name:mkdb2136ff45668c74d5d00d3eb7bf2575110fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:56.912723  440685 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17735-432634/.minikube/ca.key
	I0327 19:58:57.418078  440685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-432634/.minikube/ca.crt ...
	I0327 19:58:57.418115  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/ca.crt: {Name:mk314608f5b12c5c86497d576ce9a4e18c90e3c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.418297  440685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-432634/.minikube/ca.key ...
	I0327 19:58:57.418314  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/ca.key: {Name:mk38a43fbafa196443ccfa5c32260a82a137dbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.418383  440685 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.key
	I0327 19:58:57.602481  440685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.crt ...
	I0327 19:58:57.602514  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.crt: {Name:mkf0b309d594d6883114b03d7a59900873635fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.602688  440685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.key ...
	I0327 19:58:57.602701  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.key: {Name:mk0ee023224148eea98c5e1a1cf76e7d93d49451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.602776  440685 certs.go:256] generating profile certs ...
	I0327 19:58:57.602836  440685 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.key
	I0327 19:58:57.602851  440685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt with IP's: []
	I0327 19:58:57.764601  440685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt ...
	I0327 19:58:57.764635  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: {Name:mkb302bb5a677627955a414d6241e06e36800696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.764815  440685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.key ...
	I0327 19:58:57.764827  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.key: {Name:mk40f03f2dd6818c18950ab199718eff35766ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.764897  440685 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key.7ed6e578
	I0327 19:58:57.764917  440685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt.7ed6e578 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.8]
	I0327 19:58:57.909486  440685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt.7ed6e578 ...
	I0327 19:58:57.909528  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt.7ed6e578: {Name:mk7a5d9121e38c0ce98614c2b77182326bfb5703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.909704  440685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key.7ed6e578 ...
	I0327 19:58:57.909716  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key.7ed6e578: {Name:mkf8a67b0f62b7cd6c75dc49580017db2fcf80bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:57.909796  440685 certs.go:381] copying /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt.7ed6e578 -> /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt
	I0327 19:58:57.909927  440685 certs.go:385] copying /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key.7ed6e578 -> /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key
	I0327 19:58:57.909980  440685 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.key
	I0327 19:58:57.910007  440685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.crt with IP's: []
	I0327 19:58:58.044766  440685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.crt ...
	I0327 19:58:58.044821  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.crt: {Name:mkba7e00eb14940049fcd1e71f51bb71591c4303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:58.045034  440685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.key ...
	I0327 19:58:58.045055  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.key: {Name:mk839f8ac898ecfc5e7c456a8486c0d1b41dd6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:58.045305  440685 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 19:58:58.045352  440685 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/ca.pem (1082 bytes)
	I0327 19:58:58.045415  440685 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/cert.pem (1123 bytes)
	I0327 19:58:58.045442  440685 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-432634/.minikube/certs/key.pem (1675 bytes)
	I0327 19:58:58.046054  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 19:58:58.074107  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0327 19:58:58.100456  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 19:58:58.126290  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 19:58:58.153140  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 19:58:58.179072  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 19:58:58.205225  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 19:58:58.236548  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 19:58:58.263724  440685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17735-432634/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 19:58:58.289813  440685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 19:58:58.307839  440685 ssh_runner.go:195] Run: openssl version
	I0327 19:58:58.313921  440685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 19:58:58.325504  440685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:58:58.330440  440685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:58:58.330506  440685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:58:58.336576  440685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 19:58:58.351263  440685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 19:58:58.356605  440685 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 19:58:58.356661  440685 kubeadm.go:391] StartCluster: {Name:addons-336680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-336680 Namespace:defau
lt APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:58:58.356745  440685 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 19:58:58.356803  440685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 19:58:58.414197  440685 cri.go:89] found id: ""
	I0327 19:58:58.414298  440685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 19:58:58.428469  440685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 19:58:58.443741  440685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 19:58:58.456038  440685 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 19:58:58.456060  440685 kubeadm.go:156] found existing configuration files:
	
	I0327 19:58:58.456106  440685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 19:58:58.469723  440685 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 19:58:58.469785  440685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 19:58:58.480398  440685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 19:58:58.490796  440685 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 19:58:58.490855  440685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 19:58:58.502151  440685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 19:58:58.512768  440685 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 19:58:58.512829  440685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 19:58:58.523831  440685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 19:58:58.534578  440685 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 19:58:58.534648  440685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 19:58:58.545887  440685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 19:58:58.602776  440685 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 19:58:58.602882  440685 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 19:58:58.748201  440685 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 19:58:58.748352  440685 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 19:58:58.748484  440685 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 19:58:58.972270  440685 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 19:58:58.975553  440685 out.go:204]   - Generating certificates and keys ...
	I0327 19:58:58.975704  440685 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 19:58:58.975792  440685 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 19:58:59.243876  440685 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 19:58:59.340312  440685 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 19:58:59.427287  440685 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 19:58:59.493090  440685 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 19:58:59.654265  440685 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 19:58:59.654382  440685 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-336680 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0327 19:58:59.927531  440685 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 19:58:59.927786  440685 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-336680 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0327 19:59:00.009518  440685 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 19:59:00.438771  440685 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 19:59:00.628110  440685 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 19:59:00.628178  440685 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 19:59:00.744084  440685 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 19:59:01.167885  440685 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 19:59:01.305074  440685 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 19:59:01.443707  440685 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 19:59:01.545898  440685 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 19:59:01.546405  440685 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 19:59:01.548823  440685 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 19:59:01.550650  440685 out.go:204]   - Booting up control plane ...
	I0327 19:59:01.550790  440685 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 19:59:01.550868  440685 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 19:59:01.550967  440685 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 19:59:01.568208  440685 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 19:59:01.568353  440685 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 19:59:01.568402  440685 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 19:59:01.706538  440685 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 19:59:07.708815  440685 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003839 seconds
	I0327 19:59:07.727057  440685 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 19:59:07.746448  440685 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 19:59:08.278859  440685 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 19:59:08.279132  440685 kubeadm.go:309] [mark-control-plane] Marking the node addons-336680 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 19:59:08.794268  440685 kubeadm.go:309] [bootstrap-token] Using token: zax9zc.drucns2a3uvcw2ye
	I0327 19:59:08.796349  440685 out.go:204]   - Configuring RBAC rules ...
	I0327 19:59:08.796505  440685 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 19:59:08.803785  440685 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 19:59:08.812408  440685 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 19:59:08.816602  440685 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 19:59:08.824425  440685 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 19:59:08.828731  440685 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 19:59:08.846701  440685 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 19:59:09.091994  440685 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 19:59:09.219076  440685 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 19:59:09.220504  440685 kubeadm.go:309] 
	I0327 19:59:09.220620  440685 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 19:59:09.220632  440685 kubeadm.go:309] 
	I0327 19:59:09.220753  440685 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 19:59:09.220763  440685 kubeadm.go:309] 
	I0327 19:59:09.220796  440685 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 19:59:09.220890  440685 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 19:59:09.220967  440685 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 19:59:09.221010  440685 kubeadm.go:309] 
	I0327 19:59:09.221110  440685 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 19:59:09.221121  440685 kubeadm.go:309] 
	I0327 19:59:09.221209  440685 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 19:59:09.221235  440685 kubeadm.go:309] 
	I0327 19:59:09.221301  440685 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 19:59:09.221401  440685 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 19:59:09.221501  440685 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 19:59:09.221511  440685 kubeadm.go:309] 
	I0327 19:59:09.221606  440685 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 19:59:09.221710  440685 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 19:59:09.221722  440685 kubeadm.go:309] 
	I0327 19:59:09.221837  440685 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zax9zc.drucns2a3uvcw2ye \
	I0327 19:59:09.221986  440685 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:526806cc923b723f781622a7ea23e85ca71e0de71e19400ef50700d859416524 \
	I0327 19:59:09.222021  440685 kubeadm.go:309] 	--control-plane 
	I0327 19:59:09.222030  440685 kubeadm.go:309] 
	I0327 19:59:09.222176  440685 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 19:59:09.222199  440685 kubeadm.go:309] 
	I0327 19:59:09.222356  440685 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zax9zc.drucns2a3uvcw2ye \
	I0327 19:59:09.222507  440685 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:526806cc923b723f781622a7ea23e85ca71e0de71e19400ef50700d859416524 
	I0327 19:59:09.223591  440685 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 19:59:09.223639  440685 cni.go:84] Creating CNI manager for ""
	I0327 19:59:09.223661  440685 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 19:59:09.225659  440685 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 19:59:09.227303  440685 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 19:59:09.262294  440685 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 19:59:09.311529  440685 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 19:59:09.311658  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:09.311664  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-336680 minikube.k8s.io/updated_at=2024_03_27T19_59_09_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81 minikube.k8s.io/name=addons-336680 minikube.k8s.io/primary=true
	I0327 19:59:09.602675  440685 ops.go:34] apiserver oom_adj: -16
	I0327 19:59:09.603147  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:10.103538  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:10.603632  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:11.103847  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:11.604144  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:12.103754  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:12.603333  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:13.103499  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:13.604146  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:14.103822  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:14.603928  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:15.103642  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:15.604158  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:16.103749  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:16.603494  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:17.103455  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:17.603634  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:18.103996  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:18.603250  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:19.104052  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:19.603282  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:20.103615  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:20.603449  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:21.103512  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:21.603850  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:22.104243  440685 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:59:22.207074  440685 kubeadm.go:1107] duration metric: took 12.895498471s to wait for elevateKubeSystemPrivileges
	W0327 19:59:22.207121  440685 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 19:59:22.207129  440685 kubeadm.go:393] duration metric: took 23.850476505s to StartCluster
	I0327 19:59:22.207152  440685 settings.go:142] acquiring lock: {Name:mk5edb2a5f15635aba67a381c06d486214cff516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:59:22.207277  440685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 19:59:22.207694  440685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/kubeconfig: {Name:mk319b1465a1c498d4f6d47fff0c8f9a50737a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:59:22.207894  440685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 19:59:22.207907  440685 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 19:59:22.210055  440685 out.go:177] * Verifying Kubernetes components...
	I0327 19:59:22.207966  440685 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 19:59:22.208144  440685 config.go:182] Loaded profile config "addons-336680": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 19:59:22.210172  440685 addons.go:69] Setting default-storageclass=true in profile "addons-336680"
	I0327 19:59:22.211840  440685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:59:22.211866  440685 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-336680"
	I0327 19:59:22.210182  440685 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-336680"
	I0327 19:59:22.211973  440685 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-336680"
	I0327 19:59:22.212014  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.210199  440685 addons.go:69] Setting metrics-server=true in profile "addons-336680"
	I0327 19:59:22.212071  440685 addons.go:234] Setting addon metrics-server=true in "addons-336680"
	I0327 19:59:22.210205  440685 addons.go:69] Setting gcp-auth=true in profile "addons-336680"
	I0327 19:59:22.212187  440685 mustload.go:65] Loading cluster: addons-336680
	I0327 19:59:22.210197  440685 addons.go:69] Setting yakd=true in profile "addons-336680"
	I0327 19:59:22.212263  440685 addons.go:234] Setting addon yakd=true in "addons-336680"
	I0327 19:59:22.212296  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.210210  440685 addons.go:69] Setting helm-tiller=true in profile "addons-336680"
	I0327 19:59:22.210215  440685 addons.go:69] Setting ingress=true in profile "addons-336680"
	I0327 19:59:22.210216  440685 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-336680"
	I0327 19:59:22.210206  440685 addons.go:69] Setting cloud-spanner=true in profile "addons-336680"
	I0327 19:59:22.210223  440685 addons.go:69] Setting ingress-dns=true in profile "addons-336680"
	I0327 19:59:22.210225  440685 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-336680"
	I0327 19:59:22.210238  440685 addons.go:69] Setting inspektor-gadget=true in profile "addons-336680"
	I0327 19:59:22.210242  440685 addons.go:69] Setting volumesnapshots=true in profile "addons-336680"
	I0327 19:59:22.210243  440685 addons.go:69] Setting registry=true in profile "addons-336680"
	I0327 19:59:22.210242  440685 addons.go:69] Setting storage-provisioner=true in profile "addons-336680"
	I0327 19:59:22.212127  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.212398  440685 addons.go:234] Setting addon ingress-dns=true in "addons-336680"
	I0327 19:59:22.212428  440685 addons.go:234] Setting addon volumesnapshots=true in "addons-336680"
	I0327 19:59:22.212434  440685 addons.go:234] Setting addon registry=true in "addons-336680"
	I0327 19:59:22.212435  440685 config.go:182] Loaded profile config "addons-336680": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 19:59:22.212437  440685 addons.go:234] Setting addon ingress=true in "addons-336680"
	I0327 19:59:22.212453  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.212460  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.212463  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.212487  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.212727  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212767  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.212777  440685 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-336680"
	I0327 19:59:22.212778  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212800  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.212827  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212847  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212869  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.212872  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.212939  440685 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-336680"
	I0327 19:59:22.212946  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212976  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.212980  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.213049  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.213068  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.213112  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.213141  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.213330  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.213349  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.213431  440685 addons.go:234] Setting addon inspektor-gadget=true in "addons-336680"
	I0327 19:59:22.213466  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.213504  440685 addons.go:234] Setting addon cloud-spanner=true in "addons-336680"
	I0327 19:59:22.213535  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.213861  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.213865  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.213884  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.213914  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.213932  440685 addons.go:234] Setting addon storage-provisioner=true in "addons-336680"
	I0327 19:59:22.213960  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.214195  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.212402  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.214241  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.214248  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.217399  440685 addons.go:234] Setting addon helm-tiller=true in "addons-336680"
	I0327 19:59:22.217448  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.224461  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.224505  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.234173  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0327 19:59:22.234245  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0327 19:59:22.236001  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0327 19:59:22.236002  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0327 19:59:22.236294  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.236418  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.236595  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.236892  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.236914  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.236999  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.237016  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.237058  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.237082  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.237162  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.237468  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.237498  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.237550  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.237594  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.237603  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.237996  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.238038  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.238080  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.238120  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.238190  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.238430  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.248165  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.248218  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.248238  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.248288  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.248827  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.248854  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.252348  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35987
	I0327 19:59:22.252532  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0327 19:59:22.253126  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.253135  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.253827  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.253847  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.254281  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.254420  440685 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-336680"
	I0327 19:59:22.254470  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.254829  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.254840  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.254850  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.254878  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.255365  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.255388  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.255974  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.256559  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.256600  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.271778  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37255
	I0327 19:59:22.272425  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.273019  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.273050  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.273449  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0327 19:59:22.273480  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.273829  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.274275  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.274801  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.274862  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.275322  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.275869  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.276474  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.276898  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.276939  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.279647  440685 addons.go:234] Setting addon default-storageclass=true in "addons-336680"
	I0327 19:59:22.279694  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:22.280078  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.280122  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.284345  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I0327 19:59:22.284944  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.289907  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0327 19:59:22.290566  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.291306  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.291325  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.292048  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.292068  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.292134  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0327 19:59:22.292247  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.292947  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.292974  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.293182  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.293258  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.293736  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.293758  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.293832  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0327 19:59:22.294193  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.294279  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.294724  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.294759  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.294954  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.295545  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.295563  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.296027  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.296758  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.296796  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.297047  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.297112  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I0327 19:59:22.297240  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0327 19:59:22.297667  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.300137  440685 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 19:59:22.298252  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.298286  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I0327 19:59:22.298922  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.302037  440685 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 19:59:22.302054  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 19:59:22.302080  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.302147  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.302697  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.302708  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.303369  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.303423  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.303752  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.303769  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.304130  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.304149  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.304213  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.304396  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.304795  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.305034  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.305797  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.306497  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0327 19:59:22.306838  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.307077  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.307349  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0327 19:59:22.307511  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.307665  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.307772  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.307796  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.309364  440685 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 19:59:22.307948  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.308270  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.308705  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.308735  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.311557  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0327 19:59:22.311611  440685 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 19:59:22.311625  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 19:59:22.311648  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.311708  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0327 19:59:22.313816  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 19:59:22.312570  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.312620  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.312670  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.312942  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.313042  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.315459  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.315810  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 19:59:22.315822  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 19:59:22.315845  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.315961  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.315993  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.316000  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.318086  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.318093  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.318153  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.318172  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.318175  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.318229  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0327 19:59:22.318238  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.318245  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.318258  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.318293  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.318872  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.318967  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.319390  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.319401  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.319464  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.319708  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.319739  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.322502  440685 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 19:59:22.321192  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.321351  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.321879  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.321959  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.322065  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.324020  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0327 19:59:22.325795  440685 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 19:59:22.327636  440685 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 19:59:22.327655  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 19:59:22.327675  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.325899  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.324657  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.329415  440685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 19:59:22.324918  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.325243  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.324628  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.326771  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0327 19:59:22.327745  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.328231  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.328315  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I0327 19:59:22.333262  440685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 19:59:22.331771  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.331794  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.332024  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.332438  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.332499  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.333111  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.333149  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.334494  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0327 19:59:22.336409  440685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 19:59:22.334926  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.336449  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.335896  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.335918  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.335945  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.337985  440685 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 19:59:22.338017  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 19:59:22.338036  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.335991  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.338188  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.336222  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.338227  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.335859  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.336555  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I0327 19:59:22.337117  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.338529  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.338568  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.338668  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.338739  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.338899  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.338983  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.339062  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.339106  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.339123  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.339134  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.339264  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.339302  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.339466  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.339492  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.339504  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.339551  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.340470  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.340495  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.340726  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.340986  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0327 19:59:22.341678  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.341829  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:22.341895  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:22.341963  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.342459  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.342471  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.342549  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.344829  440685 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 19:59:22.342808  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.342994  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.343029  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.343680  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.346507  440685 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 19:59:22.346521  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 19:59:22.346542  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.346594  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.348162  440685 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 19:59:22.346984  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.347034  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.349675  440685 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 19:59:22.349696  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 19:59:22.349716  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.350357  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.350389  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.350630  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.351548  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.351567  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.351624  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.353646  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 19:59:22.352339  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.353340  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.353999  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.356503  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 19:59:22.355190  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.355225  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.355342  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.357987  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.359511  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 19:59:22.358186  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.358360  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.363286  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 19:59:22.361774  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.361957  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.362986  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0327 19:59:22.363941  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0327 19:59:22.366019  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 19:59:22.365239  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0327 19:59:22.365421  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.365493  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.366525  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.366732  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.366879  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.367817  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0327 19:59:22.367841  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0327 19:59:22.367866  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0327 19:59:22.368076  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 19:59:22.368180  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.368255  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.368603  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.368741  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.369648  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.369692  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 19:59:22.368762  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.368785  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:22.370089  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.370111  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.370129  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.370255  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.371542  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.373189  440685 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 19:59:22.371922  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.371957  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.372000  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.372000  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.372217  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.372365  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:22.374623  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.374732  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 19:59:22.374745  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 19:59:22.374762  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.374876  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:22.374999  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.375058  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.375607  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.375677  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:22.375868  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:22.377887  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.380149  440685 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 19:59:22.378330  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.378753  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.379079  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.379213  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.380119  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.380845  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.381227  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:22.383097  440685 out.go:177]   - Using image docker.io/busybox:stable
	I0327 19:59:22.381836  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.382103  440685 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 19:59:22.382133  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.384543  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.384567  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 19:59:22.384662  440685 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 19:59:22.385918  440685 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 19:59:22.386173  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.387346  440685 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0327 19:59:22.389145  440685 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0327 19:59:22.387498  440685 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 19:59:22.387516  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 19:59:22.387530  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.387426  440685 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 19:59:22.387770  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.389207  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0327 19:59:22.389211  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.390847  440685 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 19:59:22.392541  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.392569  440685 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:59:22.393989  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 19:59:22.394078  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.394180  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.394202  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 19:59:22.394214  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.394658  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.394675  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.394805  440685 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 19:59:22.394818  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 19:59:22.394846  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:22.394873  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.395066  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.395219  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.395441  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.397759  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.398635  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.398658  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.398905  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.399142  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.399272  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.400057  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.400100  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.400394  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.400781  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.400802  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.400828  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.400845  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.400879  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.400890  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.401053  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.401199  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.401236  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.401267  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.401251  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.401393  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.401406  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:22.401420  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:22.401422  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.401622  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.401630  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:22.401639  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.401633  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.401774  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.401884  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.401910  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:22.402059  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.402152  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:22.402189  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:22.402552  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	W0327 19:59:22.411468  440685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32796->192.168.39.8:22: read: connection reset by peer
	I0327 19:59:22.411511  440685 retry.go:31] will retry after 318.233245ms: ssh: handshake failed: read tcp 192.168.39.1:32796->192.168.39.8:22: read: connection reset by peer
	W0327 19:59:22.411607  440685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32812->192.168.39.8:22: read: connection reset by peer
	I0327 19:59:22.411621  440685 retry.go:31] will retry after 207.562207ms: ssh: handshake failed: read tcp 192.168.39.1:32812->192.168.39.8:22: read: connection reset by peer
	W0327 19:59:22.411856  440685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32816->192.168.39.8:22: read: connection reset by peer
	I0327 19:59:22.411887  440685 retry.go:31] will retry after 179.611828ms: ssh: handshake failed: read tcp 192.168.39.1:32816->192.168.39.8:22: read: connection reset by peer
	W0327 19:59:22.414153  440685 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:32832->192.168.39.8:22: read: connection reset by peer
	I0327 19:59:22.414185  440685 retry.go:31] will retry after 340.715697ms: ssh: handshake failed: read tcp 192.168.39.1:32832->192.168.39.8:22: read: connection reset by peer
	I0327 19:59:22.837023  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 19:59:22.856180  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 19:59:23.122886  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:59:23.210166  440685 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 19:59:23.210193  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 19:59:23.274771  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 19:59:23.276683  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 19:59:23.306140  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 19:59:23.306169  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 19:59:23.332269  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 19:59:23.352540  440685 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 19:59:23.352562  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 19:59:23.404055  440685 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 19:59:23.404083  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 19:59:23.419756  440685 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.211828648s)
	I0327 19:59:23.419878  440685 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.208006349s)
	I0327 19:59:23.419908  440685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 19:59:23.419964  440685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:59:23.466698  440685 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 19:59:23.466730  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 19:59:23.489341  440685 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0327 19:59:23.489372  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0327 19:59:23.532143  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 19:59:23.668437  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 19:59:23.668482  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 19:59:23.719941  440685 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 19:59:23.719971  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0327 19:59:23.720442  440685 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 19:59:23.720462  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 19:59:23.758056  440685 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 19:59:23.758080  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 19:59:23.766682  440685 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 19:59:23.766713  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 19:59:23.803165  440685 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 19:59:23.803205  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 19:59:23.860417  440685 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 19:59:23.860443  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 19:59:23.964120  440685 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 19:59:23.964153  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 19:59:24.080473  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 19:59:24.099010  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 19:59:24.099039  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 19:59:24.134193  440685 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 19:59:24.134228  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 19:59:24.138268  440685 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 19:59:24.138289  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 19:59:24.155557  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 19:59:24.180827  440685 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 19:59:24.180852  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 19:59:24.212997  440685 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 19:59:24.213029  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 19:59:24.243619  440685 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 19:59:24.243649  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 19:59:24.291141  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 19:59:24.291248  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 19:59:24.347446  440685 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 19:59:24.347483  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 19:59:24.377266  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 19:59:24.466950  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 19:59:24.466994  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 19:59:24.561049  440685 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 19:59:24.561086  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 19:59:24.599509  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 19:59:24.620806  440685 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:59:24.620833  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 19:59:24.689350  440685 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 19:59:24.689394  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 19:59:25.007725  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:59:25.024125  440685 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 19:59:25.024155  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 19:59:25.216364  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 19:59:25.216393  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 19:59:25.336703  440685 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 19:59:25.336746  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 19:59:25.445248  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 19:59:25.445275  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 19:59:25.600123  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 19:59:25.715721  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 19:59:25.715746  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 19:59:25.985971  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.148887583s)
	I0327 19:59:25.986046  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:25.986059  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:25.986421  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:25.986492  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:25.986511  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:25.986529  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:25.986541  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:25.986794  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:25.986817  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:26.109436  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 19:59:26.109465  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 19:59:26.672030  440685 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 19:59:26.672066  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 19:59:27.117296  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 19:59:29.134383  440685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 19:59:29.134438  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:29.138150  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:29.138689  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:29.138721  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:29.138937  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:29.139182  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:29.139354  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:29.139515  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:29.454499  440685 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 19:59:29.679019  440685 addons.go:234] Setting addon gcp-auth=true in "addons-336680"
	I0327 19:59:29.679096  440685 host.go:66] Checking if "addons-336680" exists ...
	I0327 19:59:29.679427  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:29.679466  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:29.715723  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0327 19:59:29.716135  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:29.716756  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:29.716781  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:29.717136  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:29.717621  440685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 19:59:29.717661  440685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 19:59:29.734134  440685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0327 19:59:29.734560  440685 main.go:141] libmachine: () Calling .GetVersion
	I0327 19:59:29.735114  440685 main.go:141] libmachine: Using API Version  1
	I0327 19:59:29.735147  440685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 19:59:29.735607  440685 main.go:141] libmachine: () Calling .GetMachineName
	I0327 19:59:29.735856  440685 main.go:141] libmachine: (addons-336680) Calling .GetState
	I0327 19:59:29.737781  440685 main.go:141] libmachine: (addons-336680) Calling .DriverName
	I0327 19:59:29.738051  440685 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 19:59:29.738079  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHHostname
	I0327 19:59:29.741112  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:29.741627  440685 main.go:141] libmachine: (addons-336680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:b3:dd", ip: ""} in network mk-addons-336680: {Iface:virbr1 ExpiryTime:2024-03-27 20:58:39 +0000 UTC Type:0 Mac:52:54:00:f6:b3:dd Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-336680 Clientid:01:52:54:00:f6:b3:dd}
	I0327 19:59:29.741654  440685 main.go:141] libmachine: (addons-336680) DBG | domain addons-336680 has defined IP address 192.168.39.8 and MAC address 52:54:00:f6:b3:dd in network mk-addons-336680
	I0327 19:59:29.741861  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHPort
	I0327 19:59:29.742091  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHKeyPath
	I0327 19:59:29.742281  440685 main.go:141] libmachine: (addons-336680) Calling .GetSSHUsername
	I0327 19:59:29.742464  440685 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/addons-336680/id_rsa Username:docker}
	I0327 19:59:32.300216  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.443988013s)
	I0327 19:59:32.300297  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.300311  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.300313  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.177391068s)
	I0327 19:59:32.300362  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.300381  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.300386  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.023683865s)
	I0327 19:59:32.300405  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.300420  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.300491  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.025656658s)
	I0327 19:59:32.300529  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.300541  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.300628  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.968333598s)
	I0327 19:59:32.300645  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.300653  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.300709  440685 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.880726661s)
	I0327 19:59:32.300842  440685 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.880913888s)
	I0327 19:59:32.300862  440685 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0327 19:59:32.301744  440685 node_ready.go:35] waiting up to 6m0s for node "addons-336680" to be "Ready" ...
	I0327 19:59:32.301919  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.769746932s)
	I0327 19:59:32.301941  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.301950  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.301997  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.221493757s)
	I0327 19:59:32.302024  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302031  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.146443652s)
	I0327 19:59:32.302037  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302047  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302057  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302129  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.924832171s)
	I0327 19:59:32.302146  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302155  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302251  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702712184s)
	I0327 19:59:32.302266  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302274  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302441  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.294672963s)
	W0327 19:59:32.302484  440685 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 19:59:32.302507  440685 retry.go:31] will retry after 286.138812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 19:59:32.302574  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.702414086s)
	I0327 19:59:32.302632  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302642  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302675  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302698  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302712  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.302715  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302721  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.302732  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302753  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302761  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302851  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302863  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.302871  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.302875  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.302879  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302882  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.302891  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302897  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302915  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.302925  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.302934  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302938  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.302940  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.302960  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.302968  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.302977  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.302984  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.303030  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.303037  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.303044  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.303051  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.303417  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.303494  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.303503  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.303519  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.303527  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.303802  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.303830  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.303837  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.303853  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.303860  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.303903  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.303943  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.303950  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304104  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304119  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304131  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304138  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304142  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304156  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.304158  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304163  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.304178  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304184  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304419  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304470  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304517  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304537  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304543  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304588  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304610  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304616  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304656  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.304678  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304684  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304691  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.304698  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.304739  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.304745  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.304756  440685 addons.go:470] Verifying addon metrics-server=true in "addons-336680"
	I0327 19:59:32.305030  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.305057  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.305088  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.305096  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.307769  440685 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-336680 service yakd-dashboard -n yakd-dashboard
	
	I0327 19:59:32.305385  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.305417  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.305450  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.307078  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.307098  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.309819  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.309907  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.310134  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.310146  440685 addons.go:470] Verifying addon registry=true in "addons-336680"
	I0327 19:59:32.311831  440685 out.go:177] * Verifying registry addon...
	I0327 19:59:32.310294  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.309843  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.310336  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.313407  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.313419  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.313680  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.313686  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.313694  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.313717  440685 addons.go:470] Verifying addon ingress=true in "addons-336680"
	I0327 19:59:32.315369  440685 out.go:177] * Verifying ingress addon...
	I0327 19:59:32.314533  440685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 19:59:32.317653  440685 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 19:59:32.324573  440685 node_ready.go:49] node "addons-336680" has status "Ready":"True"
	I0327 19:59:32.324611  440685 node_ready.go:38] duration metric: took 22.839221ms for node "addons-336680" to be "Ready" ...
	I0327 19:59:32.324625  440685 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:59:32.337536  440685 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 19:59:32.337559  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:32.344906  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.344925  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.345178  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.345198  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	W0327 19:59:32.345313  440685 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0327 19:59:32.352075  440685 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 19:59:32.352099  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:32.385961  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:32.385982  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:32.386422  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:32.386465  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:32.386477  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:32.400333  440685 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7279d" in "kube-system" namespace to be "Ready" ...
	I0327 19:59:32.588872  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:59:32.805913  440685 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-336680" context rescaled to 1 replicas
	I0327 19:59:32.834339  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:32.835828  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:33.321930  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:33.324428  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:33.856965  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:33.858128  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:34.217044  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.099671807s)
	I0327 19:59:34.217067  440685 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.478996938s)
	I0327 19:59:34.217108  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:34.217124  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:34.219376  440685 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 19:59:34.217497  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:34.217533  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:34.220751  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:34.220763  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:34.220773  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:34.222319  440685 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 19:59:34.221084  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:34.221084  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:34.223773  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:34.223793  440685 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-336680"
	I0327 19:59:34.223835  440685 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 19:59:34.223861  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 19:59:34.225452  440685 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 19:59:34.227562  440685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 19:59:34.252055  440685 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 19:59:34.252080  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:34.324965  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:34.330804  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:34.344258  440685 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 19:59:34.344290  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 19:59:34.407652  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:34.431308  440685 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:59:34.431330  440685 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 19:59:34.533562  440685 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:59:34.756161  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:34.805417  440685 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.216479748s)
	I0327 19:59:34.805499  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:34.805520  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:34.805869  440685 main.go:141] libmachine: (addons-336680) DBG | Closing plugin on server side
	I0327 19:59:34.805928  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:34.805945  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:34.805957  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:34.805964  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:34.806391  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:34.806411  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:34.834082  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:34.834589  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:35.233739  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:35.328703  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:35.329517  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:35.488880  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:35.488902  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:35.489265  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:35.489282  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:35.489290  440685 main.go:141] libmachine: Making call to close driver server
	I0327 19:59:35.489297  440685 main.go:141] libmachine: (addons-336680) Calling .Close
	I0327 19:59:35.489533  440685 main.go:141] libmachine: Successfully made call to close driver server
	I0327 19:59:35.489553  440685 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 19:59:35.491039  440685 addons.go:470] Verifying addon gcp-auth=true in "addons-336680"
	I0327 19:59:35.492863  440685 out.go:177] * Verifying gcp-auth addon...
	I0327 19:59:35.495056  440685 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 19:59:35.521080  440685 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 19:59:35.521103  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:35.734297  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:35.825176  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:35.825890  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:35.999287  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:36.234687  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:36.323377  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:36.325496  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:36.499767  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:36.734466  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:36.830112  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:36.831350  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:36.907469  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:36.999289  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:37.234119  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:37.324372  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:37.324730  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:37.499121  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:37.733902  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:37.825398  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:37.830233  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:37.999853  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:38.241588  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:38.323269  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:38.323723  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:38.499522  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:38.735296  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:38.830202  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:38.834174  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:38.907858  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:39.000116  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:39.237564  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:39.324019  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:39.325870  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:39.500797  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:39.734434  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:39.824363  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:39.825096  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:39.999341  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:40.235375  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:40.324774  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:40.324960  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:40.502476  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:40.739714  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:40.822918  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:40.822945  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:40.909032  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:41.000874  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:41.234829  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:41.323945  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:41.324267  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:41.499707  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:41.735005  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:41.828368  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:41.829745  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:42.001086  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:42.236136  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:42.325438  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:42.325440  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:42.499175  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:42.733703  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:42.823693  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:42.824057  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:43.002865  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:43.234230  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:43.322820  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:43.323626  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:43.408002  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:43.499916  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:43.738742  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:43.823924  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:43.825207  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:44.016420  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:44.234080  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:44.322504  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:44.329350  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:44.499439  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:44.997335  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:45.000171  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:45.002360  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:45.002402  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:45.234688  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:45.321538  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:45.323031  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:45.500087  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:45.734104  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:45.837906  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:45.838208  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:45.907930  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:46.001388  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:46.235040  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:46.323781  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:46.326416  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:46.500025  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:46.735389  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:46.823056  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:46.823155  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:47.000239  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:47.234285  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:47.323790  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:47.326443  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:47.500115  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:47.734115  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:47.830799  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:47.833235  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:47.915691  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:48.001022  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:48.235993  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:48.327426  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:48.327875  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:48.508228  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:48.734396  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:48.824834  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:48.825362  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:48.999786  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:49.244818  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:49.325872  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:49.326869  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:49.499707  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:49.752852  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:49.823357  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:49.823609  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:50.000238  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:50.234950  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:50.324576  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:50.325140  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:50.408761  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:50.499446  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:50.734726  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:50.836314  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:50.838738  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:51.000406  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:51.236223  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:51.331021  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:51.331299  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:51.499190  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:51.736362  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:51.825809  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:51.827819  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:52.001958  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:52.233519  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:52.329511  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:52.330380  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:52.519324  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:52.738278  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:52.825821  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:52.827099  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:52.911427  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:53.004578  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:53.428327  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:53.429041  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:53.429331  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:53.500914  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:53.735130  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:53.824342  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:53.824411  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:53.999054  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:54.233468  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:54.323309  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:54.325558  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:54.499760  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:54.740055  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:54.824283  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:54.824567  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:54.999607  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:55.234382  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:55.324481  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:55.325200  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:55.409872  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:55.499906  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:55.736034  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:55.823229  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:55.826399  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:55.999323  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:56.239285  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:56.322417  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:56.323044  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:56.499559  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:56.750480  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:56.858939  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:56.865509  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:57.000896  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:57.234472  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:57.325500  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:57.328951  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:57.500306  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:57.734479  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:57.828691  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:57.829468  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:57.907394  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 19:59:58.003169  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:58.234984  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:58.322901  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:58.325035  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:58.498952  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:58.734313  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:58.823820  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:58.824182  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:59.001544  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:59.238179  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:59.322103  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:59:59.326922  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:59.500794  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:59:59.734464  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:59:59.824615  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:59:59.826331  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 20:00:00.000198  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:00.240494  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:00.328703  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 20:00:00.328976  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:00.428628  440685 pod_ready.go:102] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"False"
	I0327 20:00:00.502033  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:00.733746  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:00.823405  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:00.824792  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 20:00:01.001730  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:01.234958  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:01.323904  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 20:00:01.324002  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:01.500258  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:01.747368  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:02.248753  440685 kapi.go:107] duration metric: took 29.934219344s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 20:00:02.249227  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:02.249550  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:02.259875  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:02.323181  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:02.418889  440685 pod_ready.go:92] pod "coredns-76f75df574-7279d" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:02.418914  440685 pod_ready.go:81] duration metric: took 30.01855325s for pod "coredns-76f75df574-7279d" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.418925  440685 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wdvm5" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.422437  440685 pod_ready.go:97] error getting pod "coredns-76f75df574-wdvm5" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wdvm5" not found
	I0327 20:00:02.422464  440685 pod_ready.go:81] duration metric: took 3.532767ms for pod "coredns-76f75df574-wdvm5" in "kube-system" namespace to be "Ready" ...
	E0327 20:00:02.422474  440685 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-wdvm5" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-wdvm5" not found
	I0327 20:00:02.422480  440685 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.439824  440685 pod_ready.go:92] pod "etcd-addons-336680" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:02.439852  440685 pod_ready.go:81] duration metric: took 17.364487ms for pod "etcd-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.439863  440685 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.447851  440685 pod_ready.go:92] pod "kube-apiserver-addons-336680" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:02.447871  440685 pod_ready.go:81] duration metric: took 8.0023ms for pod "kube-apiserver-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.447881  440685 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.454720  440685 pod_ready.go:92] pod "kube-controller-manager-addons-336680" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:02.454743  440685 pod_ready.go:81] duration metric: took 6.855395ms for pod "kube-controller-manager-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.454754  440685 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khg7b" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.499997  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:02.655276  440685 pod_ready.go:92] pod "kube-proxy-khg7b" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:02.655301  440685 pod_ready.go:81] duration metric: took 200.540581ms for pod "kube-proxy-khg7b" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.655310  440685 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:02.734564  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:02.822604  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:03.000256  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:03.056387  440685 pod_ready.go:92] pod "kube-scheduler-addons-336680" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:03.056424  440685 pod_ready.go:81] duration metric: took 401.105943ms for pod "kube-scheduler-addons-336680" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:03.056439  440685 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-rvmcc" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:03.235510  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:03.326703  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:03.502581  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:03.734361  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:03.823319  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:04.003676  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:04.233812  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:04.256528  440685 pod_ready.go:92] pod "metrics-server-69cf46c98-rvmcc" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:04.256570  440685 pod_ready.go:81] duration metric: took 1.200121615s for pod "metrics-server-69cf46c98-rvmcc" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:04.256586  440685 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4gdb7" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:04.323415  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:04.501013  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:04.656095  440685 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4gdb7" in "kube-system" namespace has status "Ready":"True"
	I0327 20:00:04.656122  440685 pod_ready.go:81] duration metric: took 399.528394ms for pod "nvidia-device-plugin-daemonset-4gdb7" in "kube-system" namespace to be "Ready" ...
	I0327 20:00:04.656142  440685 pod_ready.go:38] duration metric: took 32.331499356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 20:00:04.656160  440685 api_server.go:52] waiting for apiserver process to appear ...
	I0327 20:00:04.656216  440685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 20:00:04.677902  440685 api_server.go:72] duration metric: took 42.46996432s to wait for apiserver process to appear ...
	I0327 20:00:04.677939  440685 api_server.go:88] waiting for apiserver healthz status ...
	I0327 20:00:04.677968  440685 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0327 20:00:04.684869  440685 api_server.go:279] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0327 20:00:04.686024  440685 api_server.go:141] control plane version: v1.29.3
	I0327 20:00:04.686047  440685 api_server.go:131] duration metric: took 8.100643ms to wait for apiserver health ...
	I0327 20:00:04.686055  440685 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 20:00:04.733925  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:04.823847  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:04.863732  440685 system_pods.go:59] 18 kube-system pods found
	I0327 20:00:04.863775  440685 system_pods.go:61] "coredns-76f75df574-7279d" [e6ae9c86-bbfd-428f-a0eb-2137b1239a04] Running
	I0327 20:00:04.863783  440685 system_pods.go:61] "csi-hostpath-attacher-0" [f0b5bcae-0051-4256-b59a-3f07d46902b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 20:00:04.863790  440685 system_pods.go:61] "csi-hostpath-resizer-0" [906d8f41-03ad-41c7-be77-92b8e496bd01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 20:00:04.863798  440685 system_pods.go:61] "csi-hostpathplugin-95vb4" [462e4849-9e7d-4b52-b791-e87d81d01ca2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 20:00:04.863802  440685 system_pods.go:61] "etcd-addons-336680" [6b0effc8-f811-426e-a470-21b82c1c7139] Running
	I0327 20:00:04.863806  440685 system_pods.go:61] "kube-apiserver-addons-336680" [30ff1f47-dfec-4fa5-8fa1-dd7696b68891] Running
	I0327 20:00:04.863809  440685 system_pods.go:61] "kube-controller-manager-addons-336680" [8252a2bc-b06a-4044-8622-f395de2e1298] Running
	I0327 20:00:04.863813  440685 system_pods.go:61] "kube-ingress-dns-minikube" [fa2b7220-3ce8-491a-a870-a6742b2f81dd] Running
	I0327 20:00:04.863816  440685 system_pods.go:61] "kube-proxy-khg7b" [0d0716b1-420f-44dc-a474-662329b88530] Running
	I0327 20:00:04.863821  440685 system_pods.go:61] "kube-scheduler-addons-336680" [8a8ef4a3-a3a2-43ea-8160-cf202d1831b4] Running
	I0327 20:00:04.863826  440685 system_pods.go:61] "metrics-server-69cf46c98-rvmcc" [83cb6bf0-cd18-40bf-b1fb-75e6451a7a28] Running
	I0327 20:00:04.863829  440685 system_pods.go:61] "nvidia-device-plugin-daemonset-4gdb7" [71741d27-b1da-43af-84bb-3908f2dce37f] Running
	I0327 20:00:04.863832  440685 system_pods.go:61] "registry-proxy-k9g6t" [23200993-9350-48c7-b5bc-0919a62fe952] Running
	I0327 20:00:04.863835  440685 system_pods.go:61] "registry-zn5wr" [abe37412-af14-4e8a-8d12-cbfdf01a40f5] Running
	I0327 20:00:04.863841  440685 system_pods.go:61] "snapshot-controller-58dbcc7b99-8jqqr" [64b911cb-ada4-44c9-be2f-c78a6d0afcc5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 20:00:04.863849  440685 system_pods.go:61] "snapshot-controller-58dbcc7b99-zp7fp" [1111c2f1-7660-465b-b70e-5c828c6bef8d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 20:00:04.863856  440685 system_pods.go:61] "storage-provisioner" [8ed305c0-954a-4a5e-b459-84f657554201] Running
	I0327 20:00:04.863860  440685 system_pods.go:61] "tiller-deploy-7b677967b9-lqfd5" [60633554-e5ef-4c06-b9d7-2d18c2890139] Running
	I0327 20:00:04.863867  440685 system_pods.go:74] duration metric: took 177.806232ms to wait for pod list to return data ...
	I0327 20:00:04.863882  440685 default_sa.go:34] waiting for default service account to be created ...
	I0327 20:00:05.001261  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:05.057176  440685 default_sa.go:45] found service account: "default"
	I0327 20:00:05.057208  440685 default_sa.go:55] duration metric: took 193.318749ms for default service account to be created ...
	I0327 20:00:05.057219  440685 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 20:00:05.234853  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:05.273854  440685 system_pods.go:86] 18 kube-system pods found
	I0327 20:00:05.273904  440685 system_pods.go:89] "coredns-76f75df574-7279d" [e6ae9c86-bbfd-428f-a0eb-2137b1239a04] Running
	I0327 20:00:05.273922  440685 system_pods.go:89] "csi-hostpath-attacher-0" [f0b5bcae-0051-4256-b59a-3f07d46902b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 20:00:05.273935  440685 system_pods.go:89] "csi-hostpath-resizer-0" [906d8f41-03ad-41c7-be77-92b8e496bd01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 20:00:05.273951  440685 system_pods.go:89] "csi-hostpathplugin-95vb4" [462e4849-9e7d-4b52-b791-e87d81d01ca2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 20:00:05.273959  440685 system_pods.go:89] "etcd-addons-336680" [6b0effc8-f811-426e-a470-21b82c1c7139] Running
	I0327 20:00:05.273972  440685 system_pods.go:89] "kube-apiserver-addons-336680" [30ff1f47-dfec-4fa5-8fa1-dd7696b68891] Running
	I0327 20:00:05.273980  440685 system_pods.go:89] "kube-controller-manager-addons-336680" [8252a2bc-b06a-4044-8622-f395de2e1298] Running
	I0327 20:00:05.273994  440685 system_pods.go:89] "kube-ingress-dns-minikube" [fa2b7220-3ce8-491a-a870-a6742b2f81dd] Running
	I0327 20:00:05.274001  440685 system_pods.go:89] "kube-proxy-khg7b" [0d0716b1-420f-44dc-a474-662329b88530] Running
	I0327 20:00:05.274009  440685 system_pods.go:89] "kube-scheduler-addons-336680" [8a8ef4a3-a3a2-43ea-8160-cf202d1831b4] Running
	I0327 20:00:05.274018  440685 system_pods.go:89] "metrics-server-69cf46c98-rvmcc" [83cb6bf0-cd18-40bf-b1fb-75e6451a7a28] Running
	I0327 20:00:05.274026  440685 system_pods.go:89] "nvidia-device-plugin-daemonset-4gdb7" [71741d27-b1da-43af-84bb-3908f2dce37f] Running
	I0327 20:00:05.274043  440685 system_pods.go:89] "registry-proxy-k9g6t" [23200993-9350-48c7-b5bc-0919a62fe952] Running
	I0327 20:00:05.274050  440685 system_pods.go:89] "registry-zn5wr" [abe37412-af14-4e8a-8d12-cbfdf01a40f5] Running
	I0327 20:00:05.274063  440685 system_pods.go:89] "snapshot-controller-58dbcc7b99-8jqqr" [64b911cb-ada4-44c9-be2f-c78a6d0afcc5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 20:00:05.274078  440685 system_pods.go:89] "snapshot-controller-58dbcc7b99-zp7fp" [1111c2f1-7660-465b-b70e-5c828c6bef8d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 20:00:05.274091  440685 system_pods.go:89] "storage-provisioner" [8ed305c0-954a-4a5e-b459-84f657554201] Running
	I0327 20:00:05.274100  440685 system_pods.go:89] "tiller-deploy-7b677967b9-lqfd5" [60633554-e5ef-4c06-b9d7-2d18c2890139] Running
	I0327 20:00:05.274113  440685 system_pods.go:126] duration metric: took 216.886486ms to wait for k8s-apps to be running ...
	I0327 20:00:05.274126  440685 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 20:00:05.274199  440685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:00:05.296623  440685 system_svc.go:56] duration metric: took 22.484261ms WaitForService to wait for kubelet
	I0327 20:00:05.296663  440685 kubeadm.go:576] duration metric: took 43.088729618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 20:00:05.296692  440685 node_conditions.go:102] verifying NodePressure condition ...
	I0327 20:00:05.323724  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:05.456438  440685 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 20:00:05.456482  440685 node_conditions.go:123] node cpu capacity is 2
	I0327 20:00:05.456520  440685 node_conditions.go:105] duration metric: took 159.821974ms to run NodePressure ...
	I0327 20:00:05.456533  440685 start.go:240] waiting for startup goroutines ...
	I0327 20:00:05.499840  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:05.733502  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:05.822132  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:05.999755  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:06.232933  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:06.323100  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:06.499475  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:06.734219  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:06.826560  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:07.003037  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:07.244191  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:07.323619  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:07.499189  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:07.735274  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:07.822866  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:07.999863  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:08.233641  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:08.322378  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:08.501189  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:08.738876  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:08.823403  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:08.999659  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:09.244605  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:09.322762  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:09.499175  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:09.939516  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:09.940269  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:09.999866  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:10.237657  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:10.325206  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:10.500101  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:10.739978  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:10.842149  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:10.999986  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:11.234826  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:11.322964  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:11.499767  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:11.733905  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:11.831361  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:11.999928  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:12.233916  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:12.322432  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:12.503196  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:12.733596  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:12.834517  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:13.001712  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:13.233435  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:13.322983  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:13.500035  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:13.735409  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:13.823358  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:13.999533  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:14.234832  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:14.322934  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:14.500834  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:14.733872  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:14.830547  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:15.250587  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:15.251743  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:15.323372  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:15.500875  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:15.734963  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:15.823876  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:16.000574  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:16.236645  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:16.323731  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:16.499461  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:16.734571  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:16.823279  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:16.999984  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:17.238219  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:17.323209  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:17.505070  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:17.735396  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:17.823989  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:18.009306  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:18.234985  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:18.358224  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:18.500139  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:18.741872  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:18.828450  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:19.003384  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:19.234674  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:19.322212  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:19.500460  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:19.744886  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:19.823661  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:19.999801  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:20.416913  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:20.418216  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:20.499508  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:20.734022  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:20.823088  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:20.999313  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:21.235839  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:21.322834  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:21.499289  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:21.736423  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:21.824172  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:21.999474  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:22.245340  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:22.325782  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:22.501729  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:22.734079  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:22.833845  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:23.008098  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:23.235164  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:23.324049  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:23.499831  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:23.739308  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:23.823164  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:24.001909  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:24.234098  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:24.322850  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:24.500245  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:24.733821  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:24.823212  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:24.999174  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:25.235241  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:25.327977  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:25.499785  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:25.736292  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:25.823158  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:25.999514  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:26.238048  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:26.339108  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:26.503689  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:26.734667  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:26.823009  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:27.001002  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:27.235639  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:27.322829  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:27.499993  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:27.734951  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:27.823901  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:28.000411  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:28.235925  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:28.323333  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:28.500887  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:28.738777  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 20:00:28.823683  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:29.003785  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:29.234407  440685 kapi.go:107] duration metric: took 55.0068396s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 20:00:29.323611  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:29.499519  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:29.822910  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:29.999549  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:30.324097  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:30.501255  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:30.823560  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:31.000818  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:31.323932  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:31.500096  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:31.824195  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:31.999924  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:32.323866  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:32.499920  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:32.831935  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:32.999990  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:33.324508  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:33.499931  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:33.823381  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:34.000416  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:34.323612  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:34.499670  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:34.823893  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:34.999411  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:35.324120  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:35.499782  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:35.823255  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:35.999967  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:36.323082  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:36.499624  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:36.823704  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:36.999427  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:37.323784  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:37.500024  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:37.834493  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:38.000332  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:38.323002  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:38.500318  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:38.823171  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:38.999024  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:39.323000  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:39.499694  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:39.823490  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:40.000017  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:40.323775  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:40.502142  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:40.823395  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:41.000714  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:41.327313  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:41.502987  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:41.822522  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:42.000281  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:42.323000  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:42.500647  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:42.823314  440685 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 20:00:43.000753  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:43.323488  440685 kapi.go:107] duration metric: took 1m11.005830101s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 20:00:43.500397  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:44.000446  440685 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 20:00:44.500340  440685 kapi.go:107] duration metric: took 1m9.005281083s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 20:00:44.502201  440685 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-336680 cluster.
	I0327 20:00:44.503621  440685 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 20:00:44.504869  440685 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 20:00:44.506428  440685 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0327 20:00:44.507808  440685 addons.go:505] duration metric: took 1m22.299846954s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller nvidia-device-plugin storage-provisioner metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0327 20:00:44.507869  440685 start.go:245] waiting for cluster config update ...
	I0327 20:00:44.507899  440685 start.go:254] writing updated cluster config ...
	I0327 20:00:44.508219  440685 ssh_runner.go:195] Run: rm -f paused
	I0327 20:00:44.566952  440685 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 20:00:44.568644  440685 out.go:177] * Done! kubectl is now configured to use "addons-336680" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e848f265991ba       dd1b12fcb6097       12 seconds ago       Running             hello-world-app                          0                   6ca8fd45e3cb7       hello-world-app-5d77478584-b8qlp
	74ab2b30401b5       e289a478ace02       21 seconds ago       Running             nginx                                    0                   ff684272e83e0       nginx
	eb5aed5c21ddf       92b11f67642b6       25 seconds ago       Exited              task-pv-container                        0                   6bd5b85cf89cf       task-pv-pod
	fac1060d73549       db2fc13d44d50       45 seconds ago       Running             gcp-auth                                 0                   36aa77a3614a5       gcp-auth-7d69788767-624pq
	53d9696e0f930       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   1ed9895757a60       csi-hostpathplugin-95vb4
	55a6f61c9dd95       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   1ed9895757a60       csi-hostpathplugin-95vb4
	4540e225ce8bb       e899260153aed       About a minute ago   Running             liveness-probe                           0                   1ed9895757a60       csi-hostpathplugin-95vb4
	4eb3325185ab3       e255e073c508c       About a minute ago   Running             hostpath                                 0                   1ed9895757a60       csi-hostpathplugin-95vb4
	fb211156d3de0       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   1ed9895757a60       csi-hostpathplugin-95vb4
	d6bbac3b4a8e4       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   704d3339bcb1f       csi-hostpath-resizer-0
	930db20735a48       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   3a5f92a4ca24b       csi-hostpath-attacher-0
	39bae158b965c       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   1ed9895757a60       csi-hostpathplugin-95vb4
	d7f600f26310e       b29d748098e32       About a minute ago   Exited              patch                                    0                   60a498f2fa768       ingress-nginx-admission-patch-bhr74
	1d23135aa6168       b29d748098e32       About a minute ago   Exited              create                                   0                   acdfbc0d59ae8       ingress-nginx-admission-create-925kp
	aac0197f6bd02       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   4c4822bed405b       snapshot-controller-58dbcc7b99-8jqqr
	906e37f67243d       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   e68f9c9f1b4e2       snapshot-controller-58dbcc7b99-zp7fp
	009ae36e21dba       31de47c733c91       About a minute ago   Running             yakd                                     0                   de6a6d91827c8       yakd-dashboard-9947fc6bf-l7wcv
	1df41747912db       1a9bd6f561b5c       About a minute ago   Running             cloud-spanner-emulator                   0                   12bc13ab5681d       cloud-spanner-emulator-5446596998-s4r7l
	4abedfa849b68       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   5186e7fff4d15       storage-provisioner
	9d42e68d2f892       cbb01a7bd410d       2 minutes ago        Running             coredns                                  0                   1908ab18ecfe0       coredns-76f75df574-7279d
	2da6c69283523       a1d263b5dc5b0       2 minutes ago        Running             kube-proxy                               0                   b62964d46a9e5       kube-proxy-khg7b
	7657229264828       8c390d98f50c0       2 minutes ago        Running             kube-scheduler                           0                   2a8f45f359d00       kube-scheduler-addons-336680
	1f47637b4007a       6052a25da3f97       2 minutes ago        Running             kube-controller-manager                  0                   a1a7b068f30a9       kube-controller-manager-addons-336680
	5e06242d2a3d5       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   54fe3d3839788       etcd-addons-336680
	db50b5098b481       39f995c9f1996       2 minutes ago        Running             kube-apiserver                           0                   ba9125e887cbf       kube-apiserver-addons-336680
	
	
	==> containerd <==
	Mar 27 20:01:22 addons-336680 containerd[644]: time="2024-03-27T20:01:22.586529799Z" level=info msg="RemoveContainer for \"b3f8fa541e7075840f310ccdf8eda07d3cebdeb8d47a06517b575a4a2d099394\""
	Mar 27 20:01:22 addons-336680 containerd[644]: time="2024-03-27T20:01:22.597566177Z" level=info msg="RemoveContainer for \"b3f8fa541e7075840f310ccdf8eda07d3cebdeb8d47a06517b575a4a2d099394\" returns successfully"
	Mar 27 20:01:22 addons-336680 containerd[644]: time="2024-03-27T20:01:22.598212926Z" level=error msg="ContainerStatus for \"b3f8fa541e7075840f310ccdf8eda07d3cebdeb8d47a06517b575a4a2d099394\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3f8fa541e7075840f310ccdf8eda07d3cebdeb8d47a06517b575a4a2d099394\": not found"
	Mar 27 20:01:24 addons-336680 containerd[644]: time="2024-03-27T20:01:24.877645015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-5485c556b-sr8zx,Uid:7a796bc5-b3fb-4341-9619-2472a7d4313f,Namespace:headlamp,Attempt:0,}"
	Mar 27 20:01:24 addons-336680 containerd[644]: time="2024-03-27T20:01:24.994730685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 20:01:24 addons-336680 containerd[644]: time="2024-03-27T20:01:24.994812256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 20:01:24 addons-336680 containerd[644]: time="2024-03-27T20:01:24.994834163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 20:01:24 addons-336680 containerd[644]: time="2024-03-27T20:01:24.995058075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 20:01:25 addons-336680 containerd[644]: time="2024-03-27T20:01:25.089090574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-5485c556b-sr8zx,Uid:7a796bc5-b3fb-4341-9619-2472a7d4313f,Namespace:headlamp,Attempt:0,} returns sandbox id \"ab8e32c00e4a62f25fedc4c3838c9a156d60c6a8507519b9bff773b66d420191\""
	Mar 27 20:01:25 addons-336680 containerd[644]: time="2024-03-27T20:01:25.094899851Z" level=info msg="PullImage \"ghcr.io/headlamp-k8s/headlamp:v0.23.0@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5\""
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.378943526Z" level=info msg="Kill container \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\""
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.567212503Z" level=info msg="shim disconnected" id=c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53 namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.567373278Z" level=warning msg="cleaning up after shim disconnected" id=c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53 namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.567394285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.655574621Z" level=info msg="StopContainer for \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\" returns successfully"
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.656949231Z" level=info msg="StopPodSandbox for \"a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee\""
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.657041091Z" level=info msg="Container to stop \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.783736289Z" level=info msg="shim disconnected" id=a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.784002966Z" level=warning msg="cleaning up after shim disconnected" id=a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.784064943Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.904539872Z" level=info msg="TearDown network for sandbox \"a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee\" successfully"
	Mar 27 20:01:27 addons-336680 containerd[644]: time="2024-03-27T20:01:27.904607677Z" level=info msg="StopPodSandbox for \"a5e00b6c972b96d0b1b8f4f3f8c802e3508e6c34b498ed223478ab48b454dbee\" returns successfully"
	Mar 27 20:01:28 addons-336680 containerd[644]: time="2024-03-27T20:01:28.645492431Z" level=info msg="RemoveContainer for \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\""
	Mar 27 20:01:28 addons-336680 containerd[644]: time="2024-03-27T20:01:28.677720042Z" level=info msg="RemoveContainer for \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\" returns successfully"
	Mar 27 20:01:28 addons-336680 containerd[644]: time="2024-03-27T20:01:28.678272131Z" level=error msg="ContainerStatus for \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\": not found"
	
	
	==> coredns [9d42e68d2f892e20f8ecb761c1a278ab3767c25d7a296229eadf499df68dc58d] <==
	[INFO] 10.244.0.21:40010 - 3879 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006543s
	[INFO] 10.244.0.21:52396 - 34462 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000244883s
	[INFO] 10.244.0.21:40010 - 2310 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032348s
	[INFO] 10.244.0.21:52396 - 2815 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000137114s
	[INFO] 10.244.0.21:52396 - 39553 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092275s
	[INFO] 10.244.0.21:40010 - 8306 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043059s
	[INFO] 10.244.0.21:40010 - 45917 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099188s
	[INFO] 10.244.0.21:52396 - 37284 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081366s
	[INFO] 10.244.0.21:40010 - 62119 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043251s
	[INFO] 10.244.0.21:40010 - 60863 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000159643s
	[INFO] 10.244.0.21:52396 - 6821 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046382s
	[INFO] 10.244.0.21:49057 - 43554 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089379s
	[INFO] 10.244.0.21:56636 - 6308 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034325s
	[INFO] 10.244.0.21:56636 - 30671 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044105s
	[INFO] 10.244.0.21:49057 - 28441 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019948s
	[INFO] 10.244.0.21:56636 - 27086 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033116s
	[INFO] 10.244.0.21:49057 - 53599 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022909s
	[INFO] 10.244.0.21:49057 - 44890 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044722s
	[INFO] 10.244.0.21:56636 - 10225 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000323601s
	[INFO] 10.244.0.21:56636 - 48745 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031761s
	[INFO] 10.244.0.21:49057 - 44048 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000020635s
	[INFO] 10.244.0.21:56636 - 38353 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000184548s
	[INFO] 10.244.0.21:49057 - 60285 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031336s
	[INFO] 10.244.0.21:56636 - 12046 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053076s
	[INFO] 10.244.0.21:49057 - 26463 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000018684s
	
	
	==> describe nodes <==
	Name:               addons-336680
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-336680
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81
	                    minikube.k8s.io/name=addons-336680
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T19_59_09_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-336680
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-336680"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 19:59:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-336680
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 20:01:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 20:01:12 +0000   Wed, 27 Mar 2024 19:59:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 20:01:12 +0000   Wed, 27 Mar 2024 19:59:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 20:01:12 +0000   Wed, 27 Mar 2024 19:59:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 20:01:12 +0000   Wed, 27 Mar 2024 19:59:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    addons-336680
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 80a50169ddd845e2bdc2da058b51429d
	  System UUID:                80a50169-ddd8-45e2-bdc2-da058b51429d
	  Boot ID:                    beb683f1-8b58-4632-9eac-1a7327afc4db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-s4r7l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  default                     hello-world-app-5d77478584-b8qlp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  gcp-auth                    gcp-auth-7d69788767-624pq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  headlamp                    headlamp-5485c556b-sr8zx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-76f75df574-7279d                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m7s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpathplugin-95vb4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 etcd-addons-336680                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m20s
	  kube-system                 kube-apiserver-addons-336680               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-addons-336680      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-khg7b                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-addons-336680               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 snapshot-controller-58dbcc7b99-8jqqr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-58dbcc7b99-zp7fp       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-l7wcv             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node addons-336680 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node addons-336680 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x7 over 2m27s)  kubelet          Node addons-336680 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node addons-336680 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node addons-336680 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node addons-336680 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m20s                  kubelet          Node addons-336680 status is now: NodeReady
	  Normal  RegisteredNode           2m8s                   node-controller  Node addons-336680 event: Registered Node addons-336680 in Controller
	
	
	==> dmesg <==
	[Mar27 19:59] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
	[  +0.056664] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.233121] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.079361] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.364080] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.130176] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.257238] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.013855] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.554432] kauditd_printk_skb: 80 callbacks suppressed
	[  +9.231697] kauditd_printk_skb: 9 callbacks suppressed
	[  +9.136697] kauditd_printk_skb: 16 callbacks suppressed
	[Mar27 20:00] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.915273] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.098128] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.464261] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.392619] kauditd_printk_skb: 51 callbacks suppressed
	[ +14.904761] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.193824] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.707465] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.360616] kauditd_printk_skb: 52 callbacks suppressed
	[Mar27 20:01] kauditd_printk_skb: 52 callbacks suppressed
	[  +7.284229] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.136125] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.953841] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.056241] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5e06242d2a3d5e15cbb12aaf73eaf415da53b2b37c3bc1710d3d9d65dcaa2116] <==
	{"level":"warn","ts":"2024-03-27T20:00:09.92989Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T20:00:09.629728Z","time spent":"300.157542ms","remote":"127.0.0.1:54650","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-27T20:00:09.930169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.550334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85465"}
	{"level":"info","ts":"2024-03-27T20:00:09.930192Z","caller":"traceutil/trace.go:171","msg":"trace[486214960] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1010; }","duration":"204.635483ms","start":"2024-03-27T20:00:09.72555Z","end":"2024-03-27T20:00:09.930185Z","steps":["trace[486214960] 'range keys from in-memory index tree'  (duration: 204.397323ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:09.930432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.324127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-27T20:00:09.930459Z","caller":"traceutil/trace.go:171","msg":"trace[1472620087] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1010; }","duration":"200.375078ms","start":"2024-03-27T20:00:09.730074Z","end":"2024-03-27T20:00:09.930449Z","steps":["trace[1472620087] 'count revisions from in-memory index tree'  (duration: 200.285655ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:09.93062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.721381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14065"}
	{"level":"info","ts":"2024-03-27T20:00:09.930634Z","caller":"traceutil/trace.go:171","msg":"trace[653357385] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1010; }","duration":"114.756259ms","start":"2024-03-27T20:00:09.815874Z","end":"2024-03-27T20:00:09.93063Z","steps":["trace[653357385] 'range keys from in-memory index tree'  (duration: 114.647129ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T20:00:15.241436Z","caller":"traceutil/trace.go:171","msg":"trace[2104045945] linearizableReadLoop","detail":"{readStateIndex:1074; appliedIndex:1073; }","duration":"247.805133ms","start":"2024-03-27T20:00:14.993618Z","end":"2024-03-27T20:00:15.241423Z","steps":["trace[2104045945] 'read index received'  (duration: 247.598147ms)","trace[2104045945] 'applied index is now lower than readState.Index'  (duration: 206.561µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T20:00:15.241672Z","caller":"traceutil/trace.go:171","msg":"trace[1742456545] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"275.884684ms","start":"2024-03-27T20:00:14.965779Z","end":"2024-03-27T20:00:15.241664Z","steps":["trace[1742456545] 'process raft request'  (duration: 275.488414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:15.241866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.23009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-03-27T20:00:15.241895Z","caller":"traceutil/trace.go:171","msg":"trace[1113788745] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1044; }","duration":"248.301586ms","start":"2024-03-27T20:00:14.993586Z","end":"2024-03-27T20:00:15.241888Z","steps":["trace[1113788745] 'agreement among raft nodes before linearized reading'  (duration: 248.193522ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T20:00:20.390821Z","caller":"traceutil/trace.go:171","msg":"trace[516155193] linearizableReadLoop","detail":"{readStateIndex:1118; appliedIndex:1117; }","duration":"209.374473ms","start":"2024-03-27T20:00:20.181423Z","end":"2024-03-27T20:00:20.390797Z","steps":["trace[516155193] 'read index received'  (duration: 209.134541ms)","trace[516155193] 'applied index is now lower than readState.Index'  (duration: 236.221µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T20:00:20.391498Z","caller":"traceutil/trace.go:171","msg":"trace[1219311981] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"277.037951ms","start":"2024-03-27T20:00:20.114447Z","end":"2024-03-27T20:00:20.391485Z","steps":["trace[1219311981] 'process raft request'  (duration: 276.1886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:20.397151Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.213609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85655"}
	{"level":"info","ts":"2024-03-27T20:00:20.397183Z","caller":"traceutil/trace.go:171","msg":"trace[496494302] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1087; }","duration":"171.322066ms","start":"2024-03-27T20:00:20.225852Z","end":"2024-03-27T20:00:20.397174Z","steps":["trace[496494302] 'agreement among raft nodes before linearized reading'  (duration: 171.091442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:20.405051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.612378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" ","response":"range_response_count:1 size:1746"}
	{"level":"info","ts":"2024-03-27T20:00:20.405115Z","caller":"traceutil/trace.go:171","msg":"trace[194454974] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1087; }","duration":"223.707172ms","start":"2024-03-27T20:00:20.181396Z","end":"2024-03-27T20:00:20.405103Z","steps":["trace[194454974] 'agreement among raft nodes before linearized reading'  (duration: 211.404556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:48.851148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.695236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4333"}
	{"level":"info","ts":"2024-03-27T20:00:48.851257Z","caller":"traceutil/trace.go:171","msg":"trace[984338749] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1230; }","duration":"273.860319ms","start":"2024-03-27T20:00:48.577376Z","end":"2024-03-27T20:00:48.851237Z","steps":["trace[984338749] 'range keys from in-memory index tree'  (duration: 273.500582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:48.853185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.950031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86153"}
	{"level":"info","ts":"2024-03-27T20:00:48.853287Z","caller":"traceutil/trace.go:171","msg":"trace[689881647] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1230; }","duration":"246.086939ms","start":"2024-03-27T20:00:48.607188Z","end":"2024-03-27T20:00:48.853275Z","steps":["trace[689881647] 'range keys from in-memory index tree'  (duration: 245.661469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:48.853575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.092654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-27T20:00:48.853632Z","caller":"traceutil/trace.go:171","msg":"trace[1063394852] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1230; }","duration":"227.17905ms","start":"2024-03-27T20:00:48.626445Z","end":"2024-03-27T20:00:48.853624Z","steps":["trace[1063394852] 'range keys from in-memory index tree'  (duration: 227.032637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T20:00:48.853814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.782592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-27T20:00:48.853865Z","caller":"traceutil/trace.go:171","msg":"trace[1479771427] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1230; }","duration":"215.874761ms","start":"2024-03-27T20:00:48.637983Z","end":"2024-03-27T20:00:48.853858Z","steps":["trace[1479771427] 'count revisions from in-memory index tree'  (duration: 215.727133ms)"],"step_count":1}
	
	
	==> gcp-auth [fac1060d735495477eec582991d1e4944628e322fd04b4df3f35333f815ebe21] <==
	2024/03/27 20:00:44 GCP Auth Webhook started!
	2024/03/27 20:00:44 Ready to marshal response ...
	2024/03/27 20:00:44 Ready to write response ...
	2024/03/27 20:00:44 Ready to marshal response ...
	2024/03/27 20:00:44 Ready to write response ...
	2024/03/27 20:00:54 Ready to marshal response ...
	2024/03/27 20:00:54 Ready to write response ...
	2024/03/27 20:00:56 Ready to marshal response ...
	2024/03/27 20:00:56 Ready to write response ...
	2024/03/27 20:00:56 Ready to marshal response ...
	2024/03/27 20:00:56 Ready to write response ...
	2024/03/27 20:01:02 Ready to marshal response ...
	2024/03/27 20:01:02 Ready to write response ...
	2024/03/27 20:01:02 Ready to marshal response ...
	2024/03/27 20:01:02 Ready to write response ...
	2024/03/27 20:01:05 Ready to marshal response ...
	2024/03/27 20:01:05 Ready to write response ...
	2024/03/27 20:01:14 Ready to marshal response ...
	2024/03/27 20:01:14 Ready to write response ...
	2024/03/27 20:01:24 Ready to marshal response ...
	2024/03/27 20:01:24 Ready to write response ...
	2024/03/27 20:01:24 Ready to marshal response ...
	2024/03/27 20:01:24 Ready to write response ...
	2024/03/27 20:01:24 Ready to marshal response ...
	2024/03/27 20:01:24 Ready to write response ...
	
	
	==> kernel <==
	 20:01:29 up 3 min,  0 users,  load average: 3.10, 1.73, 0.69
	Linux addons-336680 5.10.207 #1 SMP Wed Mar 20 21:49:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [db50b5098b481dd3d18321ac79b3c4998a9ede68d0ec9666337dd6626b24911b] <==
	I0327 19:59:36.560132       1 trace.go:236] Trace[1857862267]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6633744c-9380-4db1-9429-d3df3a748e73,client:192.168.39.8,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.29.3 (linux/amd64) kubernetes/6813625/system:serviceaccount:kube-system:job-controller,verb:POST (27-Mar-2024 19:59:35.481) (total time: 1079ms):
	Trace[1857862267]: ["Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:05a4a548-17d2-4e81-82c7-4de246f75d2b 1071ms (19:59:35.488)]
	Trace[1857862267]: [1.079068725s] [1.079068725s] END
	W0327 20:00:03.910741       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 20:00:03.912590       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0327 20:00:03.912519       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.119.194:443: connect: connection refused
	E0327 20:00:03.914882       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.119.194:443: connect: connection refused
	E0327 20:00:03.952986       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: 403
	W0327 20:00:03.953497       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 20:00:03.953647       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0327 20:00:03.964187       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.103.119.194:443/apis/metrics.k8s.io/v1beta1: 403
	I0327 20:00:03.992908       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0327 20:00:59.624180       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.8:8443->10.244.0.27:38956: read: connection reset by peer
	E0327 20:01:03.254936       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.8:8443->10.244.0.28:39116: read: connection reset by peer
	I0327 20:01:04.923614       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0327 20:01:05.551966       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 20:01:05.779210       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.65.214"}
	I0327 20:01:10.013010       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 20:01:11.153868       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 20:01:12.213449       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0327 20:01:12.519926       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0327 20:01:14.308594       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.159.91"}
	I0327 20:01:24.470025       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.171.194"}
	
	
	==> kube-controller-manager [1f47637b4007adcd4ecc74d7b9847c1f2da8bba8470036fc1ef047436bafee76] <==
	I0327 20:01:14.238584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.414703ms"
	I0327 20:01:14.239566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="189.664µs"
	W0327 20:01:15.472093       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 20:01:15.472257       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 20:01:16.526813       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0327 20:01:16.548106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="3.959µs"
	I0327 20:01:16.555643       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0327 20:01:16.625764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="23.078256ms"
	I0327 20:01:16.633894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.518µs"
	W0327 20:01:18.970858       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 20:01:18.970997       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 20:01:21.602971       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0327 20:01:21.823256       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 20:01:22.169248       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0327 20:01:22.169292       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 20:01:22.665021       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0327 20:01:22.665170       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 20:01:24.519922       1 event.go:376] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-5485c556b to 1"
	I0327 20:01:24.563852       1 event.go:376] "Event occurred" object="headlamp/headlamp-5485c556b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-5485c556b-sr8zx"
	I0327 20:01:24.584475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="64.743831ms"
	I0327 20:01:24.618685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="34.063352ms"
	I0327 20:01:24.619215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="91.656µs"
	I0327 20:01:26.484538       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	W0327 20:01:28.429911       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 20:01:28.429976       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2da6c692835233f34e2ae4fc56929b13f6738801d8f83ad4dea38bd013f8cd4c] <==
	I0327 19:59:23.673684       1 server_others.go:72] "Using iptables proxy"
	I0327 19:59:23.702650       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.8"]
	I0327 19:59:23.814438       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 19:59:23.814488       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 19:59:23.814507       1 server_others.go:168] "Using iptables Proxier"
	I0327 19:59:23.827453       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 19:59:23.827846       1 server.go:865] "Version info" version="v1.29.3"
	I0327 19:59:23.827886       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 19:59:23.829053       1 config.go:188] "Starting service config controller"
	I0327 19:59:23.829101       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 19:59:23.829131       1 config.go:97] "Starting endpoint slice config controller"
	I0327 19:59:23.829135       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 19:59:23.829732       1 config.go:315] "Starting node config controller"
	I0327 19:59:23.829739       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 19:59:23.929840       1 shared_informer.go:318] Caches are synced for node config
	I0327 19:59:23.929942       1 shared_informer.go:318] Caches are synced for service config
	I0327 19:59:23.929998       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7657229264828dcf7fff2df6bb6c9e8e4e5979e174b2b313b95d22d1200a5bb1] <==
	W0327 19:59:06.604264       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 19:59:06.604483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 19:59:06.610991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 19:59:06.611280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 19:59:06.635259       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 19:59:06.635461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 19:59:06.714914       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 19:59:06.714970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 19:59:06.752749       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 19:59:06.753256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 19:59:06.922016       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 19:59:06.925924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 19:59:06.927761       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 19:59:06.928081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 19:59:06.993158       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 19:59:06.993550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 19:59:07.006767       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 19:59:07.007203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 19:59:07.014409       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 19:59:07.014717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 19:59:07.093394       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 19:59:07.093755       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 19:59:07.109991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 19:59:07.110044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0327 19:59:09.434237       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 20:01:22 addons-336680 kubelet[1240]: I0327 20:01:22.718720    1240 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fkxbz\" (UniqueName: \"kubernetes.io/projected/71741d27-b1da-43af-84bb-3908f2dce37f-kube-api-access-fkxbz\") on node \"addons-336680\" DevicePath \"\""
	Mar 27 20:01:22 addons-336680 kubelet[1240]: I0327 20:01:22.718917    1240 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/71741d27-b1da-43af-84bb-3908f2dce37f-device-plugin\") on node \"addons-336680\" DevicePath \"\""
	Mar 27 20:01:23 addons-336680 kubelet[1240]: I0327 20:01:23.305702    1240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71741d27-b1da-43af-84bb-3908f2dce37f" path="/var/lib/kubelet/pods/71741d27-b1da-43af-84bb-3908f2dce37f/volumes"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.573572    1240 topology_manager.go:215] "Topology Admit Handler" podUID="7a796bc5-b3fb-4341-9619-2472a7d4313f" podNamespace="headlamp" podName="headlamp-5485c556b-sr8zx"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: E0327 20:01:24.574106    1240 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa2b7220-3ce8-491a-a870-a6742b2f81dd" containerName="minikube-ingress-dns"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: E0327 20:01:24.574184    1240 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecdd264e-780a-4036-ab67-cce1829a7cf0" containerName="gadget"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: E0327 20:01:24.574238    1240 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71741d27-b1da-43af-84bb-3908f2dce37f" containerName="nvidia-device-plugin-ctr"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: E0327 20:01:24.574286    1240 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0efce137-2b8f-4084-8f11-7b0820b69714" containerName="controller"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.574526    1240 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdd264e-780a-4036-ab67-cce1829a7cf0" containerName="gadget"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.574587    1240 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2b7220-3ce8-491a-a870-a6742b2f81dd" containerName="minikube-ingress-dns"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.574631    1240 memory_manager.go:354] "RemoveStaleState removing state" podUID="71741d27-b1da-43af-84bb-3908f2dce37f" containerName="nvidia-device-plugin-ctr"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.574672    1240 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efce137-2b8f-4084-8f11-7b0820b69714" containerName="controller"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.636252    1240 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz52\" (UniqueName: \"kubernetes.io/projected/7a796bc5-b3fb-4341-9619-2472a7d4313f-kube-api-access-qwz52\") pod \"headlamp-5485c556b-sr8zx\" (UID: \"7a796bc5-b3fb-4341-9619-2472a7d4313f\") " pod="headlamp/headlamp-5485c556b-sr8zx"
	Mar 27 20:01:24 addons-336680 kubelet[1240]: I0327 20:01:24.636396    1240 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7a796bc5-b3fb-4341-9619-2472a7d4313f-gcp-creds\") pod \"headlamp-5485c556b-sr8zx\" (UID: \"7a796bc5-b3fb-4341-9619-2472a7d4313f\") " pod="headlamp/headlamp-5485c556b-sr8zx"
	Mar 27 20:01:27 addons-336680 kubelet[1240]: I0327 20:01:27.960646    1240 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s48l\" (UniqueName: \"kubernetes.io/projected/9d36e80d-6b01-4f18-a83d-1b090d1c7529-kube-api-access-8s48l\") pod \"9d36e80d-6b01-4f18-a83d-1b090d1c7529\" (UID: \"9d36e80d-6b01-4f18-a83d-1b090d1c7529\") "
	Mar 27 20:01:27 addons-336680 kubelet[1240]: I0327 20:01:27.960719    1240 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d36e80d-6b01-4f18-a83d-1b090d1c7529-config-volume\") pod \"9d36e80d-6b01-4f18-a83d-1b090d1c7529\" (UID: \"9d36e80d-6b01-4f18-a83d-1b090d1c7529\") "
	Mar 27 20:01:27 addons-336680 kubelet[1240]: I0327 20:01:27.961746    1240 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d36e80d-6b01-4f18-a83d-1b090d1c7529-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d36e80d-6b01-4f18-a83d-1b090d1c7529" (UID: "9d36e80d-6b01-4f18-a83d-1b090d1c7529"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 27 20:01:27 addons-336680 kubelet[1240]: I0327 20:01:27.969229    1240 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d36e80d-6b01-4f18-a83d-1b090d1c7529-kube-api-access-8s48l" (OuterVolumeSpecName: "kube-api-access-8s48l") pod "9d36e80d-6b01-4f18-a83d-1b090d1c7529" (UID: "9d36e80d-6b01-4f18-a83d-1b090d1c7529"). InnerVolumeSpecName "kube-api-access-8s48l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 20:01:28 addons-336680 kubelet[1240]: I0327 20:01:28.061803    1240 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8s48l\" (UniqueName: \"kubernetes.io/projected/9d36e80d-6b01-4f18-a83d-1b090d1c7529-kube-api-access-8s48l\") on node \"addons-336680\" DevicePath \"\""
	Mar 27 20:01:28 addons-336680 kubelet[1240]: I0327 20:01:28.061860    1240 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d36e80d-6b01-4f18-a83d-1b090d1c7529-config-volume\") on node \"addons-336680\" DevicePath \"\""
	Mar 27 20:01:28 addons-336680 kubelet[1240]: I0327 20:01:28.630633    1240 scope.go:117] "RemoveContainer" containerID="c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53"
	Mar 27 20:01:28 addons-336680 kubelet[1240]: I0327 20:01:28.677993    1240 scope.go:117] "RemoveContainer" containerID="c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53"
	Mar 27 20:01:28 addons-336680 kubelet[1240]: E0327 20:01:28.680934    1240 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\": not found" containerID="c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53"
	Mar 27 20:01:28 addons-336680 kubelet[1240]: I0327 20:01:28.681002    1240 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53"} err="failed to get container status \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\": rpc error: code = NotFound desc = an error occurred when try to find container \"c77f04a323afe8aa5d323af62e9b7fb690114197e01468269c3eb2c63fc7ad53\": not found"
	Mar 27 20:01:29 addons-336680 kubelet[1240]: I0327 20:01:29.309576    1240 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d36e80d-6b01-4f18-a83d-1b090d1c7529" path="/var/lib/kubelet/pods/9d36e80d-6b01-4f18-a83d-1b090d1c7529/volumes"
	
	
	==> storage-provisioner [4abedfa849b68245967dc9d5c18a8f58f71ebc6ab567bbd5db83f64986ab5da9] <==
	I0327 19:59:31.855670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 19:59:31.990555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 19:59:31.990645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 19:59:32.211142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 19:59:32.230007       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-336680_636e993b-f019-48ee-88ee-c983d3b75a71!
	I0327 19:59:32.276872       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8cd2f21b-34ed-46f3-920c-8fb1728f69ca", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-336680_636e993b-f019-48ee-88ee-c983d3b75a71 became leader
	I0327 19:59:32.560498       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-336680_636e993b-f019-48ee-88ee-c983d3b75a71!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-336680 -n addons-336680
helpers_test.go:261: (dbg) Run:  kubectl --context addons-336680 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-5485c556b-sr8zx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-336680 describe pod headlamp-5485c556b-sr8zx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-336680 describe pod headlamp-5485c556b-sr8zx: exit status 1 (66.457019ms)
** stderr ** 
	Error from server (NotFound): pods "headlamp-5485c556b-sr8zx" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-336680 describe pod headlamp-5485c556b-sr8zx: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (8.02s)
Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 5.04
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-beta.0/json-events 7.86
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 118.98
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 140.18
38 TestAddons/parallel/Registry 13.99
39 TestAddons/parallel/Ingress 18.29
40 TestAddons/parallel/InspektorGadget 12.29
41 TestAddons/parallel/MetricsServer 5.97
42 TestAddons/parallel/HelmTiller 14.66
44 TestAddons/parallel/CSI 65.25
45 TestAddons/parallel/Headlamp 14.05
47 TestAddons/parallel/LocalPath 56.29
48 TestAddons/parallel/NvidiaDevicePlugin 5.55
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.15
53 TestAddons/StoppedEnableDisable 92.8
54 TestCertOptions 70.27
55 TestCertExpiration 302.36
57 TestForceSystemdFlag 64.15
58 TestForceSystemdEnv 49.75
60 TestKVMDriverInstallOrUpdate 1.22
64 TestErrorSpam/setup 44.3
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.8
67 TestErrorSpam/pause 1.67
68 TestErrorSpam/unpause 1.87
69 TestErrorSpam/stop 4.65
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.01
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 41.84
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.05
81 TestFunctional/serial/CacheCmd/cache/add_local 1.4
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 35.95
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.58
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 3.63
95 TestFunctional/parallel/ConfigCmd 0.5
96 TestFunctional/parallel/DashboardCmd 15.19
97 TestFunctional/parallel/DryRun 0.34
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.88
103 TestFunctional/parallel/ServiceCmdConnect 41.98
104 TestFunctional/parallel/AddonsCmd 0.23
105 TestFunctional/parallel/PersistentVolumeClaim 27.53
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.53
109 TestFunctional/parallel/MySQL 37.36
110 TestFunctional/parallel/FileSync 0.23
111 TestFunctional/parallel/CertSync 1.46
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
119 TestFunctional/parallel/License 0.18
120 TestFunctional/parallel/MountCmd/any-port 15.64
121 TestFunctional/parallel/ServiceCmd/DeployApp 12.24
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
123 TestFunctional/parallel/ProfileCmd/profile_list 0.32
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.58
127 TestFunctional/parallel/ServiceCmd/List 0.48
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
130 TestFunctional/parallel/ServiceCmd/Format 0.35
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
135 TestFunctional/parallel/MountCmd/specific-port 1.74
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.97
141 TestFunctional/parallel/ImageCommands/Setup 1.05
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.09
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.53
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.29
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.36
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.83
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.15
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 199.81
166 TestMultiControlPlane/serial/DeployApp 6.98
167 TestMultiControlPlane/serial/PingHostFromPods 1.44
168 TestMultiControlPlane/serial/AddWorkerNode 43.13
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.59
171 TestMultiControlPlane/serial/CopyFile 14.39
172 TestMultiControlPlane/serial/StopSecondaryNode 93.2
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.44
174 TestMultiControlPlane/serial/RestartSecondaryNode 44.71
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.59
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 440.22
177 TestMultiControlPlane/serial/DeleteSecondaryNode 7.99
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
179 TestMultiControlPlane/serial/StopCluster 276.54
180 TestMultiControlPlane/serial/RestartCluster 150.63
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
182 TestMultiControlPlane/serial/AddSecondaryNode 71.45
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
187 TestJSONOutput/start/Command 100.04
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.75
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.68
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.38
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 95.94
219 TestMountStart/serial/StartWithMountFirst 28.06
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 27.33
222 TestMountStart/serial/VerifyMountSecond 0.41
223 TestMountStart/serial/DeleteFirst 0.77
224 TestMountStart/serial/VerifyMountPostDelete 0.43
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 21.73
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 98.92
231 TestMultiNode/serial/DeployApp2Nodes 4.16
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 41.42
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.25
236 TestMultiNode/serial/CopyFile 7.7
237 TestMultiNode/serial/StopNode 2.32
238 TestMultiNode/serial/StartAfterStop 25.84
239 TestMultiNode/serial/RestartKeepsNodes 292.81
240 TestMultiNode/serial/DeleteNode 2.37
241 TestMultiNode/serial/StopMultiNode 184.12
242 TestMultiNode/serial/RestartMultiNode 82.47
243 TestMultiNode/serial/ValidateNameConflict 48.15
248 TestPreload 262.41
250 TestScheduledStopUnix 116.52
254 TestRunningBinaryUpgrade 207.14
256 TestKubernetesUpgrade 188.13
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 97.42
268 TestNetworkPlugins/group/false 6.23
272 TestNoKubernetes/serial/StartWithStopK8s 50.49
273 TestNoKubernetes/serial/Start 36.96
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
275 TestNoKubernetes/serial/ProfileList 18.93
276 TestNoKubernetes/serial/Stop 1.39
277 TestNoKubernetes/serial/StartNoArgs 30.88
278 TestStoppedBinaryUpgrade/Setup 0.58
279 TestStoppedBinaryUpgrade/Upgrade 197.31
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
289 TestPause/serial/Start 119.71
290 TestNetworkPlugins/group/auto/Start 101.13
291 TestNetworkPlugins/group/kindnet/Start 89.6
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
293 TestNetworkPlugins/group/calico/Start 104.03
294 TestPause/serial/SecondStartNoReconfiguration 59.87
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
297 TestNetworkPlugins/group/kindnet/NetCatPod 9.37
298 TestNetworkPlugins/group/auto/KubeletFlags 0.26
299 TestNetworkPlugins/group/auto/NetCatPod 11.26
300 TestNetworkPlugins/group/kindnet/DNS 0.19
301 TestNetworkPlugins/group/kindnet/Localhost 0.16
302 TestNetworkPlugins/group/kindnet/HairPin 0.15
303 TestNetworkPlugins/group/auto/DNS 0.2
304 TestNetworkPlugins/group/auto/Localhost 0.18
305 TestNetworkPlugins/group/auto/HairPin 0.15
306 TestPause/serial/Pause 1.05
307 TestPause/serial/VerifyStatus 0.33
308 TestPause/serial/Unpause 1.02
309 TestPause/serial/PauseAgain 1.13
310 TestPause/serial/DeletePaused 0.97
311 TestPause/serial/VerifyDeletedResources 0.76
312 TestNetworkPlugins/group/custom-flannel/Start 85.52
313 TestNetworkPlugins/group/flannel/Start 110.7
314 TestNetworkPlugins/group/bridge/Start 152.77
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.23
317 TestNetworkPlugins/group/calico/NetCatPod 10.24
318 TestNetworkPlugins/group/calico/DNS 0.21
319 TestNetworkPlugins/group/calico/Localhost 0.18
320 TestNetworkPlugins/group/calico/HairPin 0.19
321 TestNetworkPlugins/group/enable-default-cni/Start 91.98
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
324 TestNetworkPlugins/group/custom-flannel/DNS 0.18
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestStartStop/group/old-k8s-version/serial/FirstStart 182.58
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
331 TestNetworkPlugins/group/flannel/NetCatPod 10.25
332 TestNetworkPlugins/group/flannel/DNS 0.21
333 TestNetworkPlugins/group/flannel/Localhost 0.16
334 TestNetworkPlugins/group/flannel/HairPin 0.16
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.56
338 TestStartStop/group/no-preload/serial/FirstStart 129.07
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
340 TestNetworkPlugins/group/bridge/NetCatPod 10.27
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
344 TestNetworkPlugins/group/bridge/DNS 0.24
345 TestNetworkPlugins/group/bridge/Localhost 0.16
346 TestNetworkPlugins/group/bridge/HairPin 0.16
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.72
350 TestStartStop/group/newest-cni/serial/FirstStart 81.92
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
353 TestStartStop/group/newest-cni/serial/Stop 2.38
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
355 TestStartStop/group/newest-cni/serial/SecondStart 33.32
356 TestStartStop/group/no-preload/serial/DeployApp 7.38
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
359 TestStartStop/group/no-preload/serial/Stop 92.57
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
361 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.56
362 TestStartStop/group/old-k8s-version/serial/DeployApp 7.52
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
367 TestStartStop/group/newest-cni/serial/Pause 2.79
368 TestStartStop/group/old-k8s-version/serial/Stop 92.57
370 TestStartStop/group/embed-certs/serial/FirstStart 61.2
371 TestStartStop/group/embed-certs/serial/DeployApp 8.33
372 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
373 TestStartStop/group/no-preload/serial/SecondStart 317.83
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
375 TestStartStop/group/embed-certs/serial/Stop 92.58
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
377 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 307.68
378 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
379 TestStartStop/group/old-k8s-version/serial/SecondStart 208.77
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
381 TestStartStop/group/embed-certs/serial/SecondStart 297.38
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
385 TestStartStop/group/old-k8s-version/serial/Pause 2.89
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
393 TestStartStop/group/no-preload/serial/Pause 2.93
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
397 TestStartStop/group/embed-certs/serial/Pause 2.87
TestDownloadOnly/v1.20.0/json-events (8.86s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-449432 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-449432 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.86309141s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.86s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-449432
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-449432: exit status 85 (73.358475ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |          |
	|         | -p download-only-449432        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:58:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:58:00.577816  439939 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:58:00.577953  439939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:00.577967  439939 out.go:304] Setting ErrFile to fd 2...
	I0327 19:58:00.577971  439939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:00.578183  439939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	W0327 19:58:00.578305  439939 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17735-432634/.minikube/config/config.json: open /home/jenkins/minikube-integration/17735-432634/.minikube/config/config.json: no such file or directory
	I0327 19:58:00.578867  439939 out.go:298] Setting JSON to true
	I0327 19:58:00.579833  439939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13233,"bootTime":1711556248,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:58:00.579912  439939 start.go:139] virtualization: kvm guest
	I0327 19:58:00.582445  439939 out.go:97] [download-only-449432] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:58:00.583914  439939 out.go:169] MINIKUBE_LOCATION=17735
	W0327 19:58:00.582572  439939 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:58:00.582653  439939 notify.go:220] Checking for updates...
	I0327 19:58:00.585480  439939 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:58:00.586916  439939 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 19:58:00.588496  439939 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:00.589858  439939 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 19:58:00.592255  439939 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 19:58:00.592578  439939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:58:00.628551  439939 out.go:97] Using the kvm2 driver based on user configuration
	I0327 19:58:00.628580  439939 start.go:297] selected driver: kvm2
	I0327 19:58:00.628586  439939 start.go:901] validating driver "kvm2" against <nil>
	I0327 19:58:00.628922  439939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:00.629009  439939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17735-432634/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 19:58:00.646990  439939 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 19:58:00.647042  439939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 19:58:00.647518  439939 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 19:58:00.647694  439939 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 19:58:00.647764  439939 cni.go:84] Creating CNI manager for ""
	I0327 19:58:00.647779  439939 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 19:58:00.647788  439939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 19:58:00.647834  439939 start.go:340] cluster config:
	{Name:download-only-449432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-449432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:58:00.648008  439939 iso.go:125] acquiring lock: {Name:mk6bbc35a3ce9b9a38f627b62192ef8de7c8520d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:00.649802  439939 out.go:97] Downloading VM boot image ...
	I0327 19:58:00.649827  439939 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17735-432634/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso
	I0327 19:58:03.560540  439939 out.go:97] Starting "download-only-449432" primary control-plane node in "download-only-449432" cluster
	I0327 19:58:03.560571  439939 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 19:58:03.581431  439939 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0327 19:58:03.581463  439939 cache.go:56] Caching tarball of preloaded images
	I0327 19:58:03.581605  439939 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 19:58:03.583188  439939 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 19:58:03.583202  439939 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 19:58:03.609474  439939 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-449432 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449432"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-449432
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.3/json-events (5.04s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-819311 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-819311 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.036503246s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (5.04s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-819311
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-819311: exit status 85 (77.974438ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-449432        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-449432        | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| start   | -o=json --download-only        | download-only-819311 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-819311        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:58:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:58:09.808433  440114 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:58:09.808641  440114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:09.808653  440114 out.go:304] Setting ErrFile to fd 2...
	I0327 19:58:09.808660  440114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:09.808912  440114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 19:58:09.809524  440114 out.go:298] Setting JSON to true
	I0327 19:58:09.810493  440114 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13242,"bootTime":1711556248,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:58:09.810594  440114 start.go:139] virtualization: kvm guest
	I0327 19:58:09.812896  440114 out.go:97] [download-only-819311] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:58:09.814417  440114 out.go:169] MINIKUBE_LOCATION=17735
	I0327 19:58:09.813130  440114 notify.go:220] Checking for updates...
	I0327 19:58:09.816908  440114 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:58:09.818359  440114 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 19:58:09.819908  440114 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:09.821278  440114 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-819311 host does not exist
	  To start a cluster, run: "minikube start -p download-only-819311"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-819311
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0-beta.0/json-events (7.86s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-952559 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-952559 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.864270514s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (7.86s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-952559
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-952559: exit status 85 (77.366634ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-449432             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-449432             | download-only-449432 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| start   | -o=json --download-only             | download-only-819311 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-819311             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| delete  | -p download-only-819311             | download-only-819311 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC | 27 Mar 24 19:58 UTC |
	| start   | -o=json --download-only             | download-only-952559 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:58 UTC |                     |
	|         | -p download-only-952559             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:58:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:58:15.206484  440271 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:58:15.206785  440271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:15.206796  440271 out.go:304] Setting ErrFile to fd 2...
	I0327 19:58:15.206801  440271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:58:15.206980  440271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 19:58:15.207578  440271 out.go:298] Setting JSON to true
	I0327 19:58:15.208516  440271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13247,"bootTime":1711556248,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:58:15.208589  440271 start.go:139] virtualization: kvm guest
	I0327 19:58:15.210813  440271 out.go:97] [download-only-952559] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:58:15.211013  440271 notify.go:220] Checking for updates...
	I0327 19:58:15.212595  440271 out.go:169] MINIKUBE_LOCATION=17735
	I0327 19:58:15.214523  440271 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:58:15.216060  440271 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 19:58:15.217272  440271 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 19:58:15.218420  440271 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 19:58:15.220918  440271 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 19:58:15.221153  440271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:58:15.254777  440271 out.go:97] Using the kvm2 driver based on user configuration
	I0327 19:58:15.254809  440271 start.go:297] selected driver: kvm2
	I0327 19:58:15.254815  440271 start.go:901] validating driver "kvm2" against <nil>
	I0327 19:58:15.255310  440271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:15.255426  440271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17735-432634/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 19:58:15.271354  440271 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 19:58:15.271435  440271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 19:58:15.272205  440271 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 19:58:15.272393  440271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 19:58:15.272464  440271 cni.go:84] Creating CNI manager for ""
	I0327 19:58:15.272482  440271 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 19:58:15.272495  440271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 19:58:15.272568  440271 start.go:340] cluster config:
	{Name:download-only-952559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-952559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:58:15.272696  440271 iso.go:125] acquiring lock: {Name:mk6bbc35a3ce9b9a38f627b62192ef8de7c8520d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:58:15.274507  440271 out.go:97] Starting "download-only-952559" primary control-plane node in "download-only-952559" cluster
	I0327 19:58:15.274523  440271 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 19:58:15.298305  440271 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0327 19:58:15.298339  440271 cache.go:56] Caching tarball of preloaded images
	I0327 19:58:15.298507  440271 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 19:58:15.300418  440271 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 19:58:15.300436  440271 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 19:58:15.322964  440271 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:da32f15385f98142eac11fb4e1af2dd3 -> /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0327 19:58:18.887994  440271 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 19:58:18.888101  440271 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17735-432634/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 19:58:19.636988  440271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on containerd
	I0327 19:58:19.637342  440271 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/download-only-952559/config.json ...
	I0327 19:58:19.637382  440271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/download-only-952559/config.json: {Name:mk4898d7ecea0334c736e9a85386d4d785e5f08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:58:19.637547  440271 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 19:58:19.637684  440271 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17735-432634/.minikube/cache/linux/amd64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-952559 host does not exist
	  To start a cluster, run: "minikube start -p download-only-952559"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-952559
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-799416 --alsologtostderr --binary-mirror http://127.0.0.1:42681 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-799416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-799416
--- PASS: TestBinaryMirror (0.58s)
TestOffline (118.98s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-032736 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-032736 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m58.076812166s)
helpers_test.go:175: Cleaning up "offline-containerd-032736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-032736
--- PASS: TestOffline (118.98s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-336680
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-336680: exit status 85 (63.882827ms)
-- stdout --
	* Profile "addons-336680" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-336680"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-336680
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-336680: exit status 85 (61.33919ms)
-- stdout --
	* Profile "addons-336680" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-336680"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
TestAddons/Setup (140.18s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-336680 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-336680 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.177246683s)
--- PASS: TestAddons/Setup (140.18s)
TestAddons/parallel/Registry (13.99s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.87192ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zn5wr" [abe37412-af14-4e8a-8d12-cbfdf01a40f5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006179691s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k9g6t" [23200993-9350-48c7-b5bc-0919a62fe952] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005009259s
addons_test.go:340: (dbg) Run:  kubectl --context addons-336680 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-336680 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-336680 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.925099734s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 ip
2024/03/27 20:00:57 [DEBUG] GET http://192.168.39.8:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.99s)
TestAddons/parallel/Ingress (18.29s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-336680 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-336680 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-336680 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5300d6c1-e931-4ebe-917f-45159286c7bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5300d6c1-e931-4ebe-917f-45159286c7bd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00503484s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-336680 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.8
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-336680 addons disable ingress-dns --alsologtostderr -v=1: (1.103521361s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-336680 addons disable ingress --alsologtostderr -v=1: (7.921406583s)
--- PASS: TestAddons/parallel/Ingress (18.29s)
TestAddons/parallel/InspektorGadget (12.29s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7rckb" [ecdd264e-780a-4036-ab67-cce1829a7cf0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005217174s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-336680
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-336680: (6.284672942s)
--- PASS: TestAddons/parallel/InspektorGadget (12.29s)
TestAddons/parallel/MetricsServer (5.97s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.00975ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-rvmcc" [83cb6bf0-cd18-40bf-b1fb-75e6451a7a28] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011858115s
addons_test.go:415: (dbg) Run:  kubectl --context addons-336680 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.97s)
TestAddons/parallel/HelmTiller (14.66s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.943105ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-lqfd5" [60633554-e5ef-4c06-b9d7-2d18c2890139] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006214482s
addons_test.go:473: (dbg) Run:  kubectl --context addons-336680 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-336680 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.968850169s)
addons_test.go:478: kubectl --context addons-336680 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-336680 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-336680 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.320708134s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.66s)
TestAddons/parallel/CSI (65.25s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.291198ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-336680 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-336680 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [abb91d7a-851a-4146-88c1-dbbfe2070c96] Pending
helpers_test.go:344: "task-pv-pod" [abb91d7a-851a-4146-88c1-dbbfe2070c96] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [abb91d7a-851a-4146-88c1-dbbfe2070c96] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00514242s
addons_test.go:584: (dbg) Run:  kubectl --context addons-336680 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-336680 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-336680 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-336680 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-336680 delete pod task-pv-pod: (1.473563508s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-336680 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-336680 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-336680 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [04cf52f8-ea6e-447f-b229-8ededa6c08de] Pending
helpers_test.go:344: "task-pv-pod-restore" [04cf52f8-ea6e-447f-b229-8ededa6c08de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [04cf52f8-ea6e-447f-b229-8ededa6c08de] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004133588s
addons_test.go:626: (dbg) Run:  kubectl --context addons-336680 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-336680 delete pod task-pv-pod-restore: (1.695870764s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-336680 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-336680 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-336680 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.899681825s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.25s)

TestAddons/parallel/Headlamp (14.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-336680 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-336680 --alsologtostderr -v=1: (1.046944318s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-sr8zx" [7a796bc5-b3fb-4341-9619-2472a7d4313f] Pending
helpers_test.go:344: "headlamp-5485c556b-sr8zx" [7a796bc5-b3fb-4341-9619-2472a7d4313f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-sr8zx" [7a796bc5-b3fb-4341-9619-2472a7d4313f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004884234s
--- PASS: TestAddons/parallel/Headlamp (14.05s)

TestAddons/parallel/LocalPath (56.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-336680 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-336680 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f98a2a9d-25f7-42f4-8d50-e5d3c0e60b19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f98a2a9d-25f7-42f4-8d50-e5d3c0e60b19] Running
helpers_test.go:344: "test-local-path" [f98a2a9d-25f7-42f4-8d50-e5d3c0e60b19] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f98a2a9d-25f7-42f4-8d50-e5d3c0e60b19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00459399s
addons_test.go:891: (dbg) Run:  kubectl --context addons-336680 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 ssh "cat /opt/local-path-provisioner/pvc-3cfedee6-16e7-4404-b203-2e35fc3cfb1a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-336680 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-336680 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-336680 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-336680 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.328897247s)
--- PASS: TestAddons/parallel/LocalPath (56.29s)

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4gdb7" [71741d27-b1da-43af-84bb-3908f2dce37f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010856921s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-336680
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-l7wcv" [0c06b4cd-054e-4308-ba13-13b49f886258] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004941614s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-336680 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-336680 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (92.8s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-336680
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-336680: (1m32.458322593s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-336680
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-336680
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-336680
--- PASS: TestAddons/StoppedEnableDisable (92.80s)

TestCertOptions (70.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-737634 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-737634 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m8.963472707s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-737634 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-737634 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-737634 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-737634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-737634
--- PASS: TestCertOptions (70.27s)

TestCertExpiration (302.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-476472 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-476472 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m22.516802008s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-476472 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
E0327 21:00:44.579947  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-476472 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (38.842052817s)
helpers_test.go:175: Cleaning up "cert-expiration-476472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-476472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-476472: (1.000954221s)
--- PASS: TestCertExpiration (302.36s)

TestForceSystemdFlag (64.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-698156 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0327 20:57:07.465030  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-698156 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m2.991292558s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-698156 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-698156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-698156
--- PASS: TestForceSystemdFlag (64.15s)

TestForceSystemdEnv (49.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-178946 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-178946 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (48.585880258s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-178946 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-178946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-178946
--- PASS: TestForceSystemdEnv (49.75s)

TestKVMDriverInstallOrUpdate (1.22s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.22s)

TestErrorSpam/setup (44.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-813018 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813018 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-813018 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813018 --driver=kvm2  --container-runtime=containerd: (44.295156786s)
--- PASS: TestErrorSpam/setup (44.30s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (4.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop: (1.522176973s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop: (1.207551206s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-813018 --log_dir /tmp/nospam-813018 stop: (1.920483696s)
--- PASS: TestErrorSpam/stop (4.65s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17735-432634/.minikube/files/etc/test/nested/copy/439928/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-870702 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m1.013168622s)
--- PASS: TestFunctional/serial/StartWithProxy (61.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.84s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --alsologtostderr -v=8
E0327 20:05:44.580263  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.586287  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.596583  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.616931  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.657350  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.737724  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:44.898158  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:45.218845  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:45.859857  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:47.140799  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:49.701101  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:05:54.821619  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:06:05.062205  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-870702 --alsologtostderr -v=8: (41.834223332s)
functional_test.go:659: soft start took 41.834765137s for "functional-870702" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.84s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-870702 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:3.1: (1.38480429s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:3.3: (1.375412908s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 cache add registry.k8s.io/pause:latest: (1.293009059s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-870702 /tmp/TestFunctionalserialCacheCmdcacheadd_local254279813/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache add minikube-local-cache-test:functional-870702
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache delete minikube-local-cache-test:functional-870702
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-870702
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.652236ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 cache reload: (1.195308058s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 kubectl -- --context functional-870702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-870702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0327 20:06:25.542981  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-870702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.945083882s)
functional_test.go:757: restart took 35.945208635s for "functional-870702" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.95s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-870702 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.58s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 logs: (1.580158931s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

TestFunctional/serial/LogsFileCmd (1.57s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 logs --file /tmp/TestFunctionalserialLogsFileCmd3269574028/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 logs --file /tmp/TestFunctionalserialLogsFileCmd3269574028/001/logs.txt: (1.5651418s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

TestFunctional/serial/InvalidService (3.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-870702 apply -f testdata/invalidsvc.yaml
E0327 20:07:06.503843  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-870702
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-870702: exit status 115 (316.024974ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.8:31792 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-870702 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.63s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 config get cpus: exit status 14 (76.926437ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 config get cpus: exit status 14 (65.888885ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (15.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-870702 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-870702 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 447442: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.19s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-870702 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (174.47922ms)

-- stdout --
	* [functional-870702] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0327 20:07:23.598978  446789 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:07:23.599464  446789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:07:23.599521  446789 out.go:304] Setting ErrFile to fd 2...
	I0327 20:07:23.599538  446789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:07:23.600002  446789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:07:23.601283  446789 out.go:298] Setting JSON to false
	I0327 20:07:23.602501  446789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13796,"bootTime":1711556248,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 20:07:23.602582  446789 start.go:139] virtualization: kvm guest
	I0327 20:07:23.604541  446789 out.go:177] * [functional-870702] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 20:07:23.606541  446789 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 20:07:23.606631  446789 notify.go:220] Checking for updates...
	I0327 20:07:23.608267  446789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 20:07:23.610158  446789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 20:07:23.611684  446789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 20:07:23.613114  446789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 20:07:23.614607  446789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 20:07:23.616649  446789 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:07:23.617102  446789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:07:23.617165  446789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:07:23.637510  446789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40421
	I0327 20:07:23.638067  446789 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:07:23.638704  446789 main.go:141] libmachine: Using API Version  1
	I0327 20:07:23.638729  446789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:07:23.639169  446789 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:07:23.639419  446789 main.go:141] libmachine: (functional-870702) Calling .DriverName
	I0327 20:07:23.639749  446789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 20:07:23.640230  446789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:07:23.640286  446789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:07:23.656448  446789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0327 20:07:23.656906  446789 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:07:23.657481  446789 main.go:141] libmachine: Using API Version  1
	I0327 20:07:23.657510  446789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:07:23.657855  446789 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:07:23.658054  446789 main.go:141] libmachine: (functional-870702) Calling .DriverName
	I0327 20:07:23.695648  446789 out.go:177] * Using the kvm2 driver based on existing profile
	I0327 20:07:23.697056  446789 start.go:297] selected driver: kvm2
	I0327 20:07:23.697078  446789 start.go:901] validating driver "kvm2" against &{Name:functional-870702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functio
nal-870702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 20:07:23.697244  446789 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 20:07:23.699793  446789 out.go:177] 
	W0327 20:07:23.701481  446789 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 20:07:23.703035  446789 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870702 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-870702 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (157.395051ms)

-- stdout --
	* [functional-870702] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0327 20:07:09.352962  446052 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:07:09.353286  446052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:07:09.353302  446052 out.go:304] Setting ErrFile to fd 2...
	I0327 20:07:09.353309  446052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:07:09.353738  446052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:07:09.354489  446052 out.go:298] Setting JSON to false
	I0327 20:07:09.355859  446052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13781,"bootTime":1711556248,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 20:07:09.355961  446052 start.go:139] virtualization: kvm guest
	I0327 20:07:09.358461  446052 out.go:177] * [functional-870702] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0327 20:07:09.359982  446052 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 20:07:09.360049  446052 notify.go:220] Checking for updates...
	I0327 20:07:09.361598  446052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 20:07:09.363057  446052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 20:07:09.364299  446052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 20:07:09.365592  446052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 20:07:09.366950  446052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 20:07:09.368661  446052 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:07:09.369033  446052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:07:09.369081  446052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:07:09.385942  446052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0327 20:07:09.386430  446052 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:07:09.387007  446052 main.go:141] libmachine: Using API Version  1
	I0327 20:07:09.387039  446052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:07:09.387440  446052 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:07:09.387656  446052 main.go:141] libmachine: (functional-870702) Calling .DriverName
	I0327 20:07:09.387951  446052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 20:07:09.388305  446052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:07:09.388346  446052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:07:09.404593  446052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0327 20:07:09.405079  446052 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:07:09.405608  446052 main.go:141] libmachine: Using API Version  1
	I0327 20:07:09.405639  446052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:07:09.406008  446052 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:07:09.406225  446052 main.go:141] libmachine: (functional-870702) Calling .DriverName
	I0327 20:07:09.440284  446052 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0327 20:07:09.441721  446052 start.go:297] selected driver: kvm2
	I0327 20:07:09.441735  446052 start.go:901] validating driver "kvm2" against &{Name:functional-870702 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functio
nal-870702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 20:07:09.441904  446052 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 20:07:09.444095  446052 out.go:177] 
	W0327 20:07:09.445410  446052 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 20:07:09.446922  446052 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (41.98s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-870702 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-870702 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6kv6r" [85fc9881-1121-47d1-9ae5-729b5f745054] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-6kv6r" [85fc9881-1121-47d1-9ae5-729b5f745054] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6kv6r" [85fc9881-1121-47d1-9ae5-729b5f745054] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.018598876s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.8:31530
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.8:31530: Get "http://192.168.39.8:31530": dial tcp 192.168.39.8:31530: connect: connection refused
functional_test.go:1671: http://192.168.39.8:31530: success! body:

Hostname: hello-node-connect-55497b8b78-6kv6r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.8:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.8:31530
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (41.98s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d81f90b5-3d14-4999-a42c-457a1b1b2ea5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005533303s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-870702 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-870702 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-870702 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-870702 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-870702 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c9b5467a-ed2d-4ddd-b73b-d7b2296c1ad2] Pending
helpers_test.go:344: "sp-pod" [c9b5467a-ed2d-4ddd-b73b-d7b2296c1ad2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c9b5467a-ed2d-4ddd-b73b-d7b2296c1ad2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005291324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-870702 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-870702 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-870702 delete -f testdata/storage-provisioner/pod.yaml: (1.293933781s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-870702 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f059c09a-c559-4071-9edd-80d23886a366] Pending
helpers_test.go:344: "sp-pod" [f059c09a-c559-4071-9edd-80d23886a366] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006692642s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-870702 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.53s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh -n functional-870702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cp functional-870702:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd978085235/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh -n functional-870702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh -n functional-870702 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)

TestFunctional/parallel/MySQL (37.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-870702 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-wshc5" [e03d9708-af72-4bdd-88c7-0c4114a4774a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-wshc5" [e03d9708-af72-4bdd-88c7-0c4114a4774a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.005315983s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;": exit status 1 (296.817953ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;": exit status 1 (266.039307ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;": exit status 1 (234.135277ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
2024/03/27 20:07:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;": exit status 1 (195.277792ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-870702 exec mysql-859648c796-wshc5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.36s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/439928/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /etc/test/nested/copy/439928/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.46s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/439928.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /etc/ssl/certs/439928.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/439928.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /usr/share/ca-certificates/439928.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4399282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /etc/ssl/certs/4399282.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4399282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /usr/share/ca-certificates/4399282.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-870702 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "sudo systemctl is-active docker": exit status 1 (214.244515ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "sudo systemctl is-active crio": exit status 1 (214.374363ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/MountCmd/any-port (15.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdany-port586715196/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711570027217811326" to /tmp/TestFunctionalparallelMountCmdany-port586715196/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711570027217811326" to /tmp/TestFunctionalparallelMountCmdany-port586715196/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711570027217811326" to /tmp/TestFunctionalparallelMountCmdany-port586715196/001/test-1711570027217811326
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.080208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 27 20:07 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 27 20:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 27 20:07 test-1711570027217811326
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh cat /mount-9p/test-1711570027217811326
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-870702 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [886aa140-f30b-48a5-8151-d1f8f255c24a] Pending
helpers_test.go:344: "busybox-mount" [886aa140-f30b-48a5-8151-d1f8f255c24a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [886aa140-f30b-48a5-8151-d1f8f255c24a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [886aa140-f30b-48a5-8151-d1f8f255c24a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.006217037s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-870702 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdany-port586715196/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.64s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-870702 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-870702 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-z75jb" [97e63bce-0c79-4622-95b1-3f75db758dd7] Pending
helpers_test.go:344: "hello-node-d7447cc7f-z75jb" [97e63bce-0c79-4622-95b1-3f75db758dd7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-z75jb" [97e63bce-0c79-4622-95b1-3f75db758dd7] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00529886s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "254.006043ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "63.905226ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "254.271521ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "73.42322ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service list -o json
functional_test.go:1490: Took "500.376415ms" to run "out/minikube-linux-amd64 -p functional-870702 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.8:31313
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.8:31313
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdspecific-port3475345852/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.04934ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdspecific-port3475345852/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "sudo umount -f /mount-9p": exit status 1 (245.201293ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-870702 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdspecific-port3475345852/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870702 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-870702
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-870702
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870702 image ls --format short --alsologtostderr:
I0327 20:07:47.748065  447994 out.go:291] Setting OutFile to fd 1 ...
I0327 20:07:47.748174  447994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:47.748185  447994 out.go:304] Setting ErrFile to fd 2...
I0327 20:07:47.748189  447994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:47.748363  447994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
I0327 20:07:47.749715  447994 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:47.749908  447994 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:47.751052  447994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:47.751109  447994 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:47.766730  447994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
I0327 20:07:47.767480  447994 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:47.768259  447994 main.go:141] libmachine: Using API Version  1
I0327 20:07:47.768298  447994 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:47.768756  447994 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:47.769031  447994 main.go:141] libmachine: (functional-870702) Calling .GetState
I0327 20:07:47.771136  447994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:47.771186  447994 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:47.788577  447994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
I0327 20:07:47.789165  447994 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:47.789786  447994 main.go:141] libmachine: Using API Version  1
I0327 20:07:47.789812  447994 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:47.790201  447994 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:47.790447  447994 main.go:141] libmachine: (functional-870702) Calling .DriverName
I0327 20:07:47.790640  447994 ssh_runner.go:195] Run: systemctl --version
I0327 20:07:47.790665  447994 main.go:141] libmachine: (functional-870702) Calling .GetSSHHostname
I0327 20:07:47.794272  447994 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:47.794688  447994 main.go:141] libmachine: (functional-870702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:7d:4d", ip: ""} in network mk-functional-870702: {Iface:virbr1 ExpiryTime:2024-03-27 21:04:48 +0000 UTC Type:0 Mac:52:54:00:e0:7d:4d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-870702 Clientid:01:52:54:00:e0:7d:4d}
I0327 20:07:47.794723  447994 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined IP address 192.168.39.8 and MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:47.794952  447994 main.go:141] libmachine: (functional-870702) Calling .GetSSHPort
I0327 20:07:47.795158  447994 main.go:141] libmachine: (functional-870702) Calling .GetSSHKeyPath
I0327 20:07:47.795325  447994 main.go:141] libmachine: (functional-870702) Calling .GetSSHUsername
I0327 20:07:47.795461  447994 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/functional-870702/id_rsa Username:docker}
I0327 20:07:47.879312  447994 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 20:07:47.924035  447994 main.go:141] libmachine: Making call to close driver server
I0327 20:07:47.924051  447994 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:47.924374  447994 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:47.924401  447994 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:47.924408  447994 main.go:141] libmachine: Making call to close driver server
I0327 20:07:47.924416  447994 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:47.924413  447994 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
I0327 20:07:47.924701  447994 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:47.924720  447994 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:47.924730  447994 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870702 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-870702  | sha256:36dcfe | 991B   |
| gcr.io/google-containers/addon-resizer      | functional-870702  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870702 image ls --format table --alsologtostderr:
I0327 20:07:48.317090  448129 out.go:291] Setting OutFile to fd 1 ...
I0327 20:07:48.317401  448129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.317413  448129 out.go:304] Setting ErrFile to fd 2...
I0327 20:07:48.317417  448129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.317733  448129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
I0327 20:07:48.319356  448129 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.319574  448129 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.320707  448129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.320764  448129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.337017  448129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34163
I0327 20:07:48.337532  448129 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.338182  448129 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.338208  448129 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.338663  448129 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.338968  448129 main.go:141] libmachine: (functional-870702) Calling .GetState
I0327 20:07:48.341189  448129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.341264  448129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.357764  448129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
I0327 20:07:48.358274  448129 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.358957  448129 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.358977  448129 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.359422  448129 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.359716  448129 main.go:141] libmachine: (functional-870702) Calling .DriverName
I0327 20:07:48.360020  448129 ssh_runner.go:195] Run: systemctl --version
I0327 20:07:48.360064  448129 main.go:141] libmachine: (functional-870702) Calling .GetSSHHostname
I0327 20:07:48.363566  448129 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.364003  448129 main.go:141] libmachine: (functional-870702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:7d:4d", ip: ""} in network mk-functional-870702: {Iface:virbr1 ExpiryTime:2024-03-27 21:04:48 +0000 UTC Type:0 Mac:52:54:00:e0:7d:4d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-870702 Clientid:01:52:54:00:e0:7d:4d}
I0327 20:07:48.364041  448129 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined IP address 192.168.39.8 and MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.364200  448129 main.go:141] libmachine: (functional-870702) Calling .GetSSHPort
I0327 20:07:48.364413  448129 main.go:141] libmachine: (functional-870702) Calling .GetSSHKeyPath
I0327 20:07:48.364609  448129 main.go:141] libmachine: (functional-870702) Calling .GetSSHUsername
I0327 20:07:48.364780  448129 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/functional-870702/id_rsa Username:docker}
I0327 20:07:48.451412  448129 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 20:07:48.522812  448129 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.522829  448129 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.523168  448129 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.523189  448129 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
I0327 20:07:48.523193  448129 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:48.523204  448129 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.523213  448129 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.523508  448129 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.523521  448129 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:48.523544  448129 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870702 image ls --format json --alsologtostderr:
[{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-870702"],"size":"10823156"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:36dcfec44ff2b327dc1fe837bcc41393a0d642660cc80194299b5bba91632568","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-870702"],"size":"991"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"}]
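For anyone post-processing this report, the `image ls --format json` stdout above is a flat JSON array of image records with `id`, `repoDigests`, `repoTags`, and `size` (bytes encoded as a decimal string). A minimal sketch of how one might summarize such output; the sample below reuses two records from the stdout above, and everything beyond those field names is illustrative:

```python
import json

# Two records copied from the `image ls --format json` stdout above.
sample = """[
  {"id": "sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:latest"], "size": "72306"},
  {"id": "sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
   "repoDigests": ["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],
   "repoTags": ["registry.k8s.io/etcd:3.5.12-0"], "size": "57236178"}
]"""

images = json.loads(sample)
# `size` is a decimal string of bytes, so convert before aggregating.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes)
print(tags)
```

The same approach applies to the full array in the log; feeding it real output would just mean substituting the captured stdout for `sample`.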
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870702 image ls --format json --alsologtostderr:
I0327 20:07:48.069642  448071 out.go:291] Setting OutFile to fd 1 ...
I0327 20:07:48.069787  448071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.069799  448071 out.go:304] Setting ErrFile to fd 2...
I0327 20:07:48.069806  448071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.070040  448071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
I0327 20:07:48.070686  448071 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.070830  448071 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.071326  448071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.071388  448071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.087435  448071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
I0327 20:07:48.087992  448071 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.088633  448071 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.088655  448071 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.089074  448071 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.089366  448071 main.go:141] libmachine: (functional-870702) Calling .GetState
I0327 20:07:48.091306  448071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.091362  448071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.107113  448071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
I0327 20:07:48.107698  448071 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.108261  448071 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.108290  448071 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.108683  448071 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.108933  448071 main.go:141] libmachine: (functional-870702) Calling .DriverName
I0327 20:07:48.109158  448071 ssh_runner.go:195] Run: systemctl --version
I0327 20:07:48.109203  448071 main.go:141] libmachine: (functional-870702) Calling .GetSSHHostname
I0327 20:07:48.113159  448071 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.113566  448071 main.go:141] libmachine: (functional-870702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:7d:4d", ip: ""} in network mk-functional-870702: {Iface:virbr1 ExpiryTime:2024-03-27 21:04:48 +0000 UTC Type:0 Mac:52:54:00:e0:7d:4d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-870702 Clientid:01:52:54:00:e0:7d:4d}
I0327 20:07:48.113597  448071 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined IP address 192.168.39.8 and MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.113870  448071 main.go:141] libmachine: (functional-870702) Calling .GetSSHPort
I0327 20:07:48.114087  448071 main.go:141] libmachine: (functional-870702) Calling .GetSSHKeyPath
I0327 20:07:48.114262  448071 main.go:141] libmachine: (functional-870702) Calling .GetSSHUsername
I0327 20:07:48.114472  448071 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/functional-870702/id_rsa Username:docker}
I0327 20:07:48.203477  448071 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 20:07:48.252688  448071 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.252700  448071 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.253004  448071 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
I0327 20:07:48.253004  448071 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.253038  448071 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:48.253051  448071 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.253062  448071 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.253333  448071 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
I0327 20:07:48.253344  448071 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.253364  448071 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870702 image ls --format yaml --alsologtostderr:
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-870702
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:36dcfec44ff2b327dc1fe837bcc41393a0d642660cc80194299b5bba91632568
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-870702
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870702 image ls --format yaml --alsologtostderr:
I0327 20:07:47.822437  448023 out.go:291] Setting OutFile to fd 1 ...
I0327 20:07:47.822691  448023 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:47.822700  448023 out.go:304] Setting ErrFile to fd 2...
I0327 20:07:47.822704  448023 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:47.822917  448023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
I0327 20:07:47.823501  448023 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:47.823639  448023 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:47.824029  448023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:47.824068  448023 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:47.841073  448023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
I0327 20:07:47.841651  448023 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:47.842280  448023 main.go:141] libmachine: Using API Version  1
I0327 20:07:47.842306  448023 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:47.842677  448023 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:47.842903  448023 main.go:141] libmachine: (functional-870702) Calling .GetState
I0327 20:07:47.844848  448023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:47.844886  448023 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:47.860152  448023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
I0327 20:07:47.860595  448023 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:47.861122  448023 main.go:141] libmachine: Using API Version  1
I0327 20:07:47.861154  448023 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:47.861536  448023 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:47.861772  448023 main.go:141] libmachine: (functional-870702) Calling .DriverName
I0327 20:07:47.861995  448023 ssh_runner.go:195] Run: systemctl --version
I0327 20:07:47.862021  448023 main.go:141] libmachine: (functional-870702) Calling .GetSSHHostname
I0327 20:07:47.864907  448023 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:47.865309  448023 main.go:141] libmachine: (functional-870702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:7d:4d", ip: ""} in network mk-functional-870702: {Iface:virbr1 ExpiryTime:2024-03-27 21:04:48 +0000 UTC Type:0 Mac:52:54:00:e0:7d:4d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-870702 Clientid:01:52:54:00:e0:7d:4d}
I0327 20:07:47.865344  448023 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined IP address 192.168.39.8 and MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:47.865495  448023 main.go:141] libmachine: (functional-870702) Calling .GetSSHPort
I0327 20:07:47.865684  448023 main.go:141] libmachine: (functional-870702) Calling .GetSSHKeyPath
I0327 20:07:47.865860  448023 main.go:141] libmachine: (functional-870702) Calling .GetSSHUsername
I0327 20:07:47.866005  448023 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/functional-870702/id_rsa Username:docker}
I0327 20:07:47.952178  448023 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 20:07:48.001208  448023 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.001228  448023 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.001636  448023 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
I0327 20:07:48.001700  448023 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.001713  448023 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:48.001723  448023 main.go:141] libmachine: Making call to close driver server
I0327 20:07:48.001735  448023 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:48.002034  448023 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:48.002055  448023 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:48.002074  448023 main.go:141] libmachine: (functional-870702) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh pgrep buildkitd: exit status 1 (219.036961ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image build -t localhost/my-image:functional-870702 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image build -t localhost/my-image:functional-870702 testdata/build --alsologtostderr: (2.522310815s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870702 image build -t localhost/my-image:functional-870702 testdata/build --alsologtostderr:
I0327 20:07:48.206887  448105 out.go:291] Setting OutFile to fd 1 ...
I0327 20:07:48.207182  448105 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.207193  448105 out.go:304] Setting ErrFile to fd 2...
I0327 20:07:48.207198  448105 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 20:07:48.207440  448105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
I0327 20:07:48.208132  448105 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.208677  448105 config.go:182] Loaded profile config "functional-870702": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 20:07:48.209098  448105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.209139  448105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.225658  448105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34437
I0327 20:07:48.226250  448105 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.226924  448105 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.226954  448105 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.227408  448105 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.227658  448105 main.go:141] libmachine: (functional-870702) Calling .GetState
I0327 20:07:48.229810  448105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 20:07:48.229848  448105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 20:07:48.245988  448105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
I0327 20:07:48.246694  448105 main.go:141] libmachine: () Calling .GetVersion
I0327 20:07:48.247211  448105 main.go:141] libmachine: Using API Version  1
I0327 20:07:48.247238  448105 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 20:07:48.247648  448105 main.go:141] libmachine: () Calling .GetMachineName
I0327 20:07:48.247861  448105 main.go:141] libmachine: (functional-870702) Calling .DriverName
I0327 20:07:48.248144  448105 ssh_runner.go:195] Run: systemctl --version
I0327 20:07:48.248176  448105 main.go:141] libmachine: (functional-870702) Calling .GetSSHHostname
I0327 20:07:48.251244  448105 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.251730  448105 main.go:141] libmachine: (functional-870702) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:7d:4d", ip: ""} in network mk-functional-870702: {Iface:virbr1 ExpiryTime:2024-03-27 21:04:48 +0000 UTC Type:0 Mac:52:54:00:e0:7d:4d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-870702 Clientid:01:52:54:00:e0:7d:4d}
I0327 20:07:48.251756  448105 main.go:141] libmachine: (functional-870702) DBG | domain functional-870702 has defined IP address 192.168.39.8 and MAC address 52:54:00:e0:7d:4d in network mk-functional-870702
I0327 20:07:48.251884  448105 main.go:141] libmachine: (functional-870702) Calling .GetSSHPort
I0327 20:07:48.252084  448105 main.go:141] libmachine: (functional-870702) Calling .GetSSHKeyPath
I0327 20:07:48.252278  448105 main.go:141] libmachine: (functional-870702) Calling .GetSSHUsername
I0327 20:07:48.252499  448105 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/functional-870702/id_rsa Username:docker}
I0327 20:07:48.345881  448105 build_images.go:161] Building image from path: /tmp/build.1090851620.tar
I0327 20:07:48.345938  448105 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 20:07:48.360728  448105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1090851620.tar
I0327 20:07:48.369924  448105 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1090851620.tar: stat -c "%s %y" /var/lib/minikube/build/build.1090851620.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1090851620.tar': No such file or directory
I0327 20:07:48.369971  448105 ssh_runner.go:362] scp /tmp/build.1090851620.tar --> /var/lib/minikube/build/build.1090851620.tar (3072 bytes)
I0327 20:07:48.408282  448105 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1090851620
I0327 20:07:48.421477  448105 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1090851620 -xf /var/lib/minikube/build/build.1090851620.tar
I0327 20:07:48.435917  448105 containerd.go:394] Building image: /var/lib/minikube/build/build.1090851620
I0327 20:07:48.435990  448105 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1090851620 --local dockerfile=/var/lib/minikube/build/build.1090851620 --output type=image,name=localhost/my-image:functional-870702
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:746624be2d894ea1e0397cf77cf642c73a709bb1e475c1803c8ad15522087224
#8 exporting manifest sha256:746624be2d894ea1e0397cf77cf642c73a709bb1e475c1803c8ad15522087224 0.0s done
#8 exporting config sha256:6e14e722371ea3abab53daa3f88c139cd5b783af4646ebd83e975b879cb81cb9 0.0s done
#8 naming to localhost/my-image:functional-870702 done
#8 DONE 0.2s
I0327 20:07:50.634896  448105 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1090851620 --local dockerfile=/var/lib/minikube/build/build.1090851620 --output type=image,name=localhost/my-image:functional-870702: (2.1988667s)
I0327 20:07:50.635002  448105 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1090851620
I0327 20:07:50.652151  448105 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1090851620.tar
I0327 20:07:50.666441  448105 build_images.go:217] Built localhost/my-image:functional-870702 from /tmp/build.1090851620.tar
I0327 20:07:50.666479  448105 build_images.go:133] succeeded building to: functional-870702
I0327 20:07:50.666483  448105 build_images.go:134] failed building to: 
I0327 20:07:50.666508  448105 main.go:141] libmachine: Making call to close driver server
I0327 20:07:50.666523  448105 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:50.666837  448105 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:50.666859  448105 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 20:07:50.666868  448105 main.go:141] libmachine: Making call to close driver server
I0327 20:07:50.666878  448105 main.go:141] libmachine: (functional-870702) Calling .Close
I0327 20:07:50.667102  448105 main.go:141] libmachine: Successfully made call to close driver server
I0327 20:07:50.667113  448105 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)

TestFunctional/parallel/ImageCommands/Setup (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.030023233s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-870702
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.05s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T" /mount1: exit status 1 (363.988046ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-870702 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup849815540/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr: (6.825111s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr: (3.23131092s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.145320355s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-870702
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image load --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr: (4.869383412s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image save gcr.io/google-containers/addon-resizer:functional-870702 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image save gcr.io/google-containers/addon-resizer:functional-870702 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.359607691s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image rm gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.580493286s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-870702
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-870702 image save --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-870702 image save --daemon gcr.io/google-containers/addon-resizer:functional-870702 --alsologtostderr: (1.111644497s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-870702
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-870702
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-870702
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-870702
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (199.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-713439 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 20:08:28.424053  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:10:44.580983  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:11:12.266101  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-713439 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m19.092345352s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.81s)

TestMultiControlPlane/serial/DeployApp (6.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-713439 -- rollout status deployment/busybox: (4.448519324s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-4smjz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-5pftv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-pnv4f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-4smjz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-5pftv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-pnv4f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-4smjz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-5pftv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-pnv4f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.98s)

TestMultiControlPlane/serial/PingHostFromPods (1.44s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-4smjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-4smjz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-5pftv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-5pftv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-pnv4f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-713439 -- exec busybox-7fdf7869d9-pnv4f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.44s)
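The host probe at ha_test.go:207 above depends on a fixed output shape from busybox-style nslookup: `awk 'NR==5'` selects the fifth line (the answer record) and `cut -d' ' -f3` takes its third space-separated field as the host IP, which the follow-up `ping -c 1` then targets. A minimal local sketch of that extraction, run against a hypothetical sample instead of a live resolver (the addresses below are illustrative, not from this run):

```shell
# Sketch of the pipeline from ha_test.go:207, applied to a canned
# busybox-format nslookup output rather than a real DNS query.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 picks the answer line; field 3 is the resolved IP address.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # prints 192.168.39.1
```

Note the fragility this implies: the pipeline breaks if the resolver output gains or loses a line (e.g. a different nslookup implementation), which is why the test pins busybox for these pods.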

TestMultiControlPlane/serial/AddWorkerNode (43.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-713439 -v=7 --alsologtostderr
E0327 20:12:07.465617  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.471029  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.481348  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.501729  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.542066  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.622417  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:07.782823  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:08.103996  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:08.744312  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:10.024748  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:12.585558  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:12:17.706063  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-713439 -v=7 --alsologtostderr: (42.240139692s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.13s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-713439 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

TestMultiControlPlane/serial/CopyFile (14.39s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp testdata/cp-test.txt ha-713439:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile366273105/001/cp-test_ha-713439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439:/home/docker/cp-test.txt ha-713439-m02:/home/docker/cp-test_ha-713439_ha-713439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test_ha-713439_ha-713439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439:/home/docker/cp-test.txt ha-713439-m03:/home/docker/cp-test_ha-713439_ha-713439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test_ha-713439_ha-713439-m03.txt"
E0327 20:12:27.947133  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439:/home/docker/cp-test.txt ha-713439-m04:/home/docker/cp-test_ha-713439_ha-713439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test_ha-713439_ha-713439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp testdata/cp-test.txt ha-713439-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile366273105/001/cp-test_ha-713439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m02:/home/docker/cp-test.txt ha-713439:/home/docker/cp-test_ha-713439-m02_ha-713439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test_ha-713439-m02_ha-713439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m02:/home/docker/cp-test.txt ha-713439-m03:/home/docker/cp-test_ha-713439-m02_ha-713439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test_ha-713439-m02_ha-713439-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m02:/home/docker/cp-test.txt ha-713439-m04:/home/docker/cp-test_ha-713439-m02_ha-713439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test_ha-713439-m02_ha-713439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp testdata/cp-test.txt ha-713439-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile366273105/001/cp-test_ha-713439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m03:/home/docker/cp-test.txt ha-713439:/home/docker/cp-test_ha-713439-m03_ha-713439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test_ha-713439-m03_ha-713439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m03:/home/docker/cp-test.txt ha-713439-m02:/home/docker/cp-test_ha-713439-m03_ha-713439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test_ha-713439-m03_ha-713439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m03:/home/docker/cp-test.txt ha-713439-m04:/home/docker/cp-test_ha-713439-m03_ha-713439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test_ha-713439-m03_ha-713439-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp testdata/cp-test.txt ha-713439-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile366273105/001/cp-test_ha-713439-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m04:/home/docker/cp-test.txt ha-713439:/home/docker/cp-test_ha-713439-m04_ha-713439.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439 "sudo cat /home/docker/cp-test_ha-713439-m04_ha-713439.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m04:/home/docker/cp-test.txt ha-713439-m02:/home/docker/cp-test_ha-713439-m04_ha-713439-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m02 "sudo cat /home/docker/cp-test_ha-713439-m04_ha-713439-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 cp ha-713439-m04:/home/docker/cp-test.txt ha-713439-m03:/home/docker/cp-test_ha-713439-m04_ha-713439-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 ssh -n ha-713439-m03 "sudo cat /home/docker/cp-test_ha-713439-m04_ha-713439-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.39s)
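The CopyFile sequence above follows one pattern for every node pair: push testdata/cp-test.txt with `minikube cp`, read it back over `minikube ssh -n <node> "sudo cat ..."`, and compare. A local stand-in for that round-trip check, with plain `cp`/`cat` substituting for the minikube commands and hypothetical node names:

```shell
# Local sketch of the copy-and-verify loop; cp and cat stand in for
# `minikube cp` and `minikube ssh -n <node> "sudo cat ..."`.
tmp=$(mktemp -d)
printf 'cp-test payload\n' > "$tmp/cp-test.txt"

for node in m01 m02 m03 m04; do            # hypothetical node names
  mkdir -p "$tmp/$node"
  cp "$tmp/cp-test.txt" "$tmp/$node/cp-test.txt"   # ~ minikube cp
  roundtrip=$(cat "$tmp/$node/cp-test.txt")        # ~ minikube ssh ... sudo cat
  [ "$roundtrip" = 'cp-test payload' ] || exit 1   # content must survive intact
done
echo "all nodes verified"
```

The real test additionally copies between every ordered pair of nodes (source node to target node), which is why the section runs 4 x 4 combinations rather than a single loop.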

TestMultiControlPlane/serial/StopSecondaryNode (93.2s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 node stop m02 -v=7 --alsologtostderr
E0327 20:12:48.427339  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:13:29.387755  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-713439 node stop m02 -v=7 --alsologtostderr: (1m32.480651694s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr: exit status 7 (717.569184ms)

                                                
                                                
-- stdout --
	ha-713439
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-713439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713439-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-713439-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 20:14:11.620377  452312 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:14:11.620534  452312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:14:11.620548  452312 out.go:304] Setting ErrFile to fd 2...
	I0327 20:14:11.620555  452312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:14:11.620785  452312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:14:11.620993  452312 out.go:298] Setting JSON to false
	I0327 20:14:11.621027  452312 mustload.go:65] Loading cluster: ha-713439
	I0327 20:14:11.621154  452312 notify.go:220] Checking for updates...
	I0327 20:14:11.621535  452312 config.go:182] Loaded profile config "ha-713439": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:14:11.621556  452312 status.go:255] checking status of ha-713439 ...
	I0327 20:14:11.621931  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.622027  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.638294  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0327 20:14:11.638744  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.639541  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.639570  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.639945  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.640144  452312 main.go:141] libmachine: (ha-713439) Calling .GetState
	I0327 20:14:11.641699  452312 status.go:330] ha-713439 host status = "Running" (err=<nil>)
	I0327 20:14:11.641721  452312 host.go:66] Checking if "ha-713439" exists ...
	I0327 20:14:11.642159  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.642214  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.657630  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I0327 20:14:11.658157  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.658770  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.658801  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.659143  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.659328  452312 main.go:141] libmachine: (ha-713439) Calling .GetIP
	I0327 20:14:11.661992  452312 main.go:141] libmachine: (ha-713439) DBG | domain ha-713439 has defined MAC address 52:54:00:34:7a:b0 in network mk-ha-713439
	I0327 20:14:11.662391  452312 main.go:141] libmachine: (ha-713439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:7a:b0", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:08:28 +0000 UTC Type:0 Mac:52:54:00:34:7a:b0 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-713439 Clientid:01:52:54:00:34:7a:b0}
	I0327 20:14:11.662429  452312 main.go:141] libmachine: (ha-713439) DBG | domain ha-713439 has defined IP address 192.168.39.122 and MAC address 52:54:00:34:7a:b0 in network mk-ha-713439
	I0327 20:14:11.662554  452312 host.go:66] Checking if "ha-713439" exists ...
	I0327 20:14:11.662847  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.662884  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.678815  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0327 20:14:11.679286  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.679803  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.679829  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.680228  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.680457  452312 main.go:141] libmachine: (ha-713439) Calling .DriverName
	I0327 20:14:11.680700  452312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 20:14:11.680736  452312 main.go:141] libmachine: (ha-713439) Calling .GetSSHHostname
	I0327 20:14:11.684147  452312 main.go:141] libmachine: (ha-713439) DBG | domain ha-713439 has defined MAC address 52:54:00:34:7a:b0 in network mk-ha-713439
	I0327 20:14:11.684701  452312 main.go:141] libmachine: (ha-713439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:7a:b0", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:08:28 +0000 UTC Type:0 Mac:52:54:00:34:7a:b0 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-713439 Clientid:01:52:54:00:34:7a:b0}
	I0327 20:14:11.684732  452312 main.go:141] libmachine: (ha-713439) DBG | domain ha-713439 has defined IP address 192.168.39.122 and MAC address 52:54:00:34:7a:b0 in network mk-ha-713439
	I0327 20:14:11.684916  452312 main.go:141] libmachine: (ha-713439) Calling .GetSSHPort
	I0327 20:14:11.685117  452312 main.go:141] libmachine: (ha-713439) Calling .GetSSHKeyPath
	I0327 20:14:11.685277  452312 main.go:141] libmachine: (ha-713439) Calling .GetSSHUsername
	I0327 20:14:11.685478  452312 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/ha-713439/id_rsa Username:docker}
	I0327 20:14:11.775668  452312 ssh_runner.go:195] Run: systemctl --version
	I0327 20:14:11.784833  452312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:14:11.804392  452312 kubeconfig.go:125] found "ha-713439" server: "https://192.168.39.254:8443"
	I0327 20:14:11.804440  452312 api_server.go:166] Checking apiserver status ...
	I0327 20:14:11.804497  452312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 20:14:11.825730  452312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1232/cgroup
	W0327 20:14:11.838642  452312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1232/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 20:14:11.838706  452312 ssh_runner.go:195] Run: ls
	I0327 20:14:11.844106  452312 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0327 20:14:11.848897  452312 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0327 20:14:11.848921  452312 status.go:422] ha-713439 apiserver status = Running (err=<nil>)
	I0327 20:14:11.848936  452312 status.go:257] ha-713439 status: &{Name:ha-713439 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:14:11.848961  452312 status.go:255] checking status of ha-713439-m02 ...
	I0327 20:14:11.849390  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.849437  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.868090  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0327 20:14:11.868579  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.869162  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.869183  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.869563  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.869801  452312 main.go:141] libmachine: (ha-713439-m02) Calling .GetState
	I0327 20:14:11.871811  452312 status.go:330] ha-713439-m02 host status = "Stopped" (err=<nil>)
	I0327 20:14:11.871825  452312 status.go:343] host is not running, skipping remaining checks
	I0327 20:14:11.871832  452312 status.go:257] ha-713439-m02 status: &{Name:ha-713439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:14:11.871850  452312 status.go:255] checking status of ha-713439-m03 ...
	I0327 20:14:11.872274  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.872329  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.889397  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0327 20:14:11.889878  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.890411  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.890449  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.890805  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.891016  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetState
	I0327 20:14:11.892748  452312 status.go:330] ha-713439-m03 host status = "Running" (err=<nil>)
	I0327 20:14:11.892765  452312 host.go:66] Checking if "ha-713439-m03" exists ...
	I0327 20:14:11.893049  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.893085  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.908528  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0327 20:14:11.909020  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.909543  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.909570  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.909853  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.910022  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetIP
	I0327 20:14:11.912931  452312 main.go:141] libmachine: (ha-713439-m03) DBG | domain ha-713439-m03 has defined MAC address 52:54:00:c7:b8:11 in network mk-ha-713439
	I0327 20:14:11.913396  452312 main.go:141] libmachine: (ha-713439-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:b8:11", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:10:39 +0000 UTC Type:0 Mac:52:54:00:c7:b8:11 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-713439-m03 Clientid:01:52:54:00:c7:b8:11}
	I0327 20:14:11.913434  452312 main.go:141] libmachine: (ha-713439-m03) DBG | domain ha-713439-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:c7:b8:11 in network mk-ha-713439
	I0327 20:14:11.913731  452312 host.go:66] Checking if "ha-713439-m03" exists ...
	I0327 20:14:11.914137  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:11.914212  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:11.931257  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0327 20:14:11.931819  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:11.932332  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:11.932357  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:11.932764  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:11.932985  452312 main.go:141] libmachine: (ha-713439-m03) Calling .DriverName
	I0327 20:14:11.933226  452312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 20:14:11.933263  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetSSHHostname
	I0327 20:14:11.935937  452312 main.go:141] libmachine: (ha-713439-m03) DBG | domain ha-713439-m03 has defined MAC address 52:54:00:c7:b8:11 in network mk-ha-713439
	I0327 20:14:11.936318  452312 main.go:141] libmachine: (ha-713439-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:b8:11", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:10:39 +0000 UTC Type:0 Mac:52:54:00:c7:b8:11 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-713439-m03 Clientid:01:52:54:00:c7:b8:11}
	I0327 20:14:11.936347  452312 main.go:141] libmachine: (ha-713439-m03) DBG | domain ha-713439-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:c7:b8:11 in network mk-ha-713439
	I0327 20:14:11.936432  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetSSHPort
	I0327 20:14:11.936607  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetSSHKeyPath
	I0327 20:14:11.936780  452312 main.go:141] libmachine: (ha-713439-m03) Calling .GetSSHUsername
	I0327 20:14:11.936935  452312 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/ha-713439-m03/id_rsa Username:docker}
	I0327 20:14:12.027053  452312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:14:12.049928  452312 kubeconfig.go:125] found "ha-713439" server: "https://192.168.39.254:8443"
	I0327 20:14:12.049960  452312 api_server.go:166] Checking apiserver status ...
	I0327 20:14:12.050002  452312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 20:14:12.068640  452312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup
	W0327 20:14:12.084451  452312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 20:14:12.084530  452312 ssh_runner.go:195] Run: ls
	I0327 20:14:12.091310  452312 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0327 20:14:12.095940  452312 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0327 20:14:12.095965  452312 status.go:422] ha-713439-m03 apiserver status = Running (err=<nil>)
	I0327 20:14:12.095976  452312 status.go:257] ha-713439-m03 status: &{Name:ha-713439-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:14:12.095999  452312 status.go:255] checking status of ha-713439-m04 ...
	I0327 20:14:12.096463  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:12.096511  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:12.111958  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0327 20:14:12.112414  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:12.112851  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:12.112870  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:12.113225  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:12.113417  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetState
	I0327 20:14:12.114866  452312 status.go:330] ha-713439-m04 host status = "Running" (err=<nil>)
	I0327 20:14:12.114884  452312 host.go:66] Checking if "ha-713439-m04" exists ...
	I0327 20:14:12.115204  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:12.115247  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:12.130524  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0327 20:14:12.131034  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:12.131477  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:12.131497  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:12.131857  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:12.132045  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetIP
	I0327 20:14:12.135076  452312 main.go:141] libmachine: (ha-713439-m04) DBG | domain ha-713439-m04 has defined MAC address 52:54:00:b0:2c:0b in network mk-ha-713439
	I0327 20:14:12.135537  452312 main.go:141] libmachine: (ha-713439-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2c:0b", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:11:56 +0000 UTC Type:0 Mac:52:54:00:b0:2c:0b Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-713439-m04 Clientid:01:52:54:00:b0:2c:0b}
	I0327 20:14:12.135568  452312 main.go:141] libmachine: (ha-713439-m04) DBG | domain ha-713439-m04 has defined IP address 192.168.39.95 and MAC address 52:54:00:b0:2c:0b in network mk-ha-713439
	I0327 20:14:12.135776  452312 host.go:66] Checking if "ha-713439-m04" exists ...
	I0327 20:14:12.136079  452312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:14:12.136118  452312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:14:12.150910  452312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0327 20:14:12.151351  452312 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:14:12.151889  452312 main.go:141] libmachine: Using API Version  1
	I0327 20:14:12.151913  452312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:14:12.152316  452312 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:14:12.152523  452312 main.go:141] libmachine: (ha-713439-m04) Calling .DriverName
	I0327 20:14:12.152777  452312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 20:14:12.152802  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetSSHHostname
	I0327 20:14:12.155296  452312 main.go:141] libmachine: (ha-713439-m04) DBG | domain ha-713439-m04 has defined MAC address 52:54:00:b0:2c:0b in network mk-ha-713439
	I0327 20:14:12.155742  452312 main.go:141] libmachine: (ha-713439-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2c:0b", ip: ""} in network mk-ha-713439: {Iface:virbr1 ExpiryTime:2024-03-27 21:11:56 +0000 UTC Type:0 Mac:52:54:00:b0:2c:0b Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-713439-m04 Clientid:01:52:54:00:b0:2c:0b}
	I0327 20:14:12.155775  452312 main.go:141] libmachine: (ha-713439-m04) DBG | domain ha-713439-m04 has defined IP address 192.168.39.95 and MAC address 52:54:00:b0:2c:0b in network mk-ha-713439
	I0327 20:14:12.155957  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetSSHPort
	I0327 20:14:12.156118  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetSSHKeyPath
	I0327 20:14:12.156283  452312 main.go:141] libmachine: (ha-713439-m04) Calling .GetSSHUsername
	I0327 20:14:12.156398  452312 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/ha-713439-m04/id_rsa Username:docker}
	I0327 20:14:12.249404  452312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:14:12.268424  452312 status.go:257] ha-713439-m04 status: &{Name:ha-713439-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.20s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 node start m02 -v=7 --alsologtostderr
E0327 20:14:51.308812  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-713439 node start m02 -v=7 --alsologtostderr: (43.742125083s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.59s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (440.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-713439 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-713439 -v=7 --alsologtostderr
E0327 20:15:44.580430  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:17:07.465476  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:17:35.149603  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-713439 -v=7 --alsologtostderr: (4m37.835135771s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-713439 --wait=true -v=7 --alsologtostderr
E0327 20:20:44.580453  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:22:07.465635  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:22:07.627392  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-713439 --wait=true -v=7 --alsologtostderr: (2m42.256364103s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-713439
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (440.22s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.99s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-713439 node delete m03 -v=7 --alsologtostderr: (7.162918782s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.99s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMultiControlPlane/serial/StopCluster (276.54s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 stop -v=7 --alsologtostderr
E0327 20:25:44.580923  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-713439 stop -v=7 --alsologtostderr: (4m36.409460523s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr: exit status 7 (126.650215ms)
-- stdout --
	ha-713439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713439-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0327 20:27:03.078100  455763 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:27:03.078240  455763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:27:03.078245  455763 out.go:304] Setting ErrFile to fd 2...
	I0327 20:27:03.078249  455763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:27:03.078465  455763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:27:03.078663  455763 out.go:298] Setting JSON to false
	I0327 20:27:03.078690  455763 mustload.go:65] Loading cluster: ha-713439
	I0327 20:27:03.078749  455763 notify.go:220] Checking for updates...
	I0327 20:27:03.079108  455763 config.go:182] Loaded profile config "ha-713439": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:27:03.079144  455763 status.go:255] checking status of ha-713439 ...
	I0327 20:27:03.079555  455763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:27:03.079648  455763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:27:03.100251  455763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0327 20:27:03.100772  455763 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:27:03.101388  455763 main.go:141] libmachine: Using API Version  1
	I0327 20:27:03.101412  455763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:27:03.101804  455763 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:27:03.102063  455763 main.go:141] libmachine: (ha-713439) Calling .GetState
	I0327 20:27:03.103764  455763 status.go:330] ha-713439 host status = "Stopped" (err=<nil>)
	I0327 20:27:03.103784  455763 status.go:343] host is not running, skipping remaining checks
	I0327 20:27:03.103793  455763 status.go:257] ha-713439 status: &{Name:ha-713439 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:27:03.103821  455763 status.go:255] checking status of ha-713439-m02 ...
	I0327 20:27:03.104121  455763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:27:03.104165  455763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:27:03.119568  455763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0327 20:27:03.120067  455763 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:27:03.120579  455763 main.go:141] libmachine: Using API Version  1
	I0327 20:27:03.120600  455763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:27:03.120904  455763 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:27:03.121092  455763 main.go:141] libmachine: (ha-713439-m02) Calling .GetState
	I0327 20:27:03.122827  455763 status.go:330] ha-713439-m02 host status = "Stopped" (err=<nil>)
	I0327 20:27:03.122843  455763 status.go:343] host is not running, skipping remaining checks
	I0327 20:27:03.122849  455763 status.go:257] ha-713439-m02 status: &{Name:ha-713439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:27:03.122880  455763 status.go:255] checking status of ha-713439-m04 ...
	I0327 20:27:03.123164  455763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:27:03.123200  455763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:27:03.139543  455763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0327 20:27:03.140034  455763 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:27:03.140548  455763 main.go:141] libmachine: Using API Version  1
	I0327 20:27:03.140571  455763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:27:03.140928  455763 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:27:03.141160  455763 main.go:141] libmachine: (ha-713439-m04) Calling .GetState
	I0327 20:27:03.142701  455763 status.go:330] ha-713439-m04 host status = "Stopped" (err=<nil>)
	I0327 20:27:03.142718  455763 status.go:343] host is not running, skipping remaining checks
	I0327 20:27:03.142726  455763 status.go:257] ha-713439-m04 status: &{Name:ha-713439-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.54s)

TestMultiControlPlane/serial/RestartCluster (150.63s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-713439 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 20:27:07.464958  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:28:30.510175  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-713439 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m29.814391209s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (150.63s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

TestMultiControlPlane/serial/AddSecondaryNode (71.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-713439 --control-plane -v=7 --alsologtostderr
E0327 20:30:44.580572  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-713439 --control-plane -v=7 --alsologtostderr: (1m10.556378239s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-713439 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

TestJSONOutput/start/Command (100.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-606348 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0327 20:32:07.467662  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-606348 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.039790678s)
--- PASS: TestJSONOutput/start/Command (100.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-606348 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-606348 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-606348 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-606348 --output=json --user=testUser: (7.374932853s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-646325 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-646325 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.941413ms)
-- stdout --
	{"specversion":"1.0","id":"36663c77-691c-4caf-8a3f-b9e076813381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-646325] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"edaac28a-42e1-4262-b4b4-470fff92588a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17735"}}
	{"specversion":"1.0","id":"b43e8cac-3449-495a-8cb9-9e5d42c1b77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"da18f990-f880-4483-bf0e-25a745153c6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig"}}
	{"specversion":"1.0","id":"18ebdad4-f559-4d7c-9344-2aa5c6fdaf5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube"}}
	{"specversion":"1.0","id":"18258efd-6dba-4f05-bb9d-4f4b5b040020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ac309ce5-8f5c-4730-8144-276c80dca4bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fc4a22a-94b1-468d-bd29-4b6fd4c2688a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-646325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-646325
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (95.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-529371 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-529371 --driver=kvm2  --container-runtime=containerd: (48.833419706s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-531760 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-531760 --driver=kvm2  --container-runtime=containerd: (44.636946604s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-529371
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-531760
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-531760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-531760
helpers_test.go:175: Cleaning up "first-529371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-529371
--- PASS: TestMinikubeProfile (95.94s)

TestMountStart/serial/StartWithMountFirst (28.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-152616 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-152616 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.063118889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.06s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-152616 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-152616 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (27.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-168219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-168219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.326500576s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.33s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.77s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-152616 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.77s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-168219
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-168219: (1.314590924s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (21.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-168219
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-168219: (20.726704526s)
--- PASS: TestMountStart/serial/RestartStopped (21.73s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (98.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 20:35:44.580821  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:37:07.465201  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m38.47509085s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.92s)

TestMultiNode/serial/DeployApp2Nodes (4.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-571739 -- rollout status deployment/busybox: (2.476264379s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-7dvjp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-lwfsf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-7dvjp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-lwfsf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-7dvjp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-lwfsf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.16s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-7dvjp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-7dvjp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-lwfsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571739 -- exec busybox-7fdf7869d9-lwfsf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (41.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-571739 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-571739 -v 3 --alsologtostderr: (40.804495101s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.42s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-571739 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (7.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp testdata/cp-test.txt multinode-571739:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458395162/001/cp-test_multinode-571739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739:/home/docker/cp-test.txt multinode-571739-m02:/home/docker/cp-test_multinode-571739_multinode-571739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test_multinode-571739_multinode-571739-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739:/home/docker/cp-test.txt multinode-571739-m03:/home/docker/cp-test_multinode-571739_multinode-571739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test_multinode-571739_multinode-571739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp testdata/cp-test.txt multinode-571739-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458395162/001/cp-test_multinode-571739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m02:/home/docker/cp-test.txt multinode-571739:/home/docker/cp-test_multinode-571739-m02_multinode-571739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test_multinode-571739-m02_multinode-571739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m02:/home/docker/cp-test.txt multinode-571739-m03:/home/docker/cp-test_multinode-571739-m02_multinode-571739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test_multinode-571739-m02_multinode-571739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp testdata/cp-test.txt multinode-571739-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458395162/001/cp-test_multinode-571739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m03:/home/docker/cp-test.txt multinode-571739:/home/docker/cp-test_multinode-571739-m03_multinode-571739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739 "sudo cat /home/docker/cp-test_multinode-571739-m03_multinode-571739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 cp multinode-571739-m03:/home/docker/cp-test.txt multinode-571739-m02:/home/docker/cp-test_multinode-571739-m03_multinode-571739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 ssh -n multinode-571739-m02 "sudo cat /home/docker/cp-test_multinode-571739-m03_multinode-571739-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-571739 node stop m03: (1.418143672s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571739 status: exit status 7 (453.89379ms)

-- stdout --
	multinode-571739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr: exit status 7 (444.895233ms)

-- stdout --
	multinode-571739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 20:38:11.516051  462394 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:38:11.516186  462394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:38:11.516196  462394 out.go:304] Setting ErrFile to fd 2...
	I0327 20:38:11.516202  462394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:38:11.516847  462394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:38:11.517091  462394 out.go:298] Setting JSON to false
	I0327 20:38:11.517119  462394 mustload.go:65] Loading cluster: multinode-571739
	I0327 20:38:11.517207  462394 notify.go:220] Checking for updates...
	I0327 20:38:11.517473  462394 config.go:182] Loaded profile config "multinode-571739": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:38:11.517488  462394 status.go:255] checking status of multinode-571739 ...
	I0327 20:38:11.517889  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.517971  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.533412  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0327 20:38:11.533918  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.534533  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.534556  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.534940  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.535219  462394 main.go:141] libmachine: (multinode-571739) Calling .GetState
	I0327 20:38:11.536922  462394 status.go:330] multinode-571739 host status = "Running" (err=<nil>)
	I0327 20:38:11.536944  462394 host.go:66] Checking if "multinode-571739" exists ...
	I0327 20:38:11.537263  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.537299  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.553406  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0327 20:38:11.553788  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.554241  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.554264  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.554571  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.554739  462394 main.go:141] libmachine: (multinode-571739) Calling .GetIP
	I0327 20:38:11.557290  462394 main.go:141] libmachine: (multinode-571739) DBG | domain multinode-571739 has defined MAC address 52:54:00:02:a2:3c in network mk-multinode-571739
	I0327 20:38:11.557669  462394 main.go:141] libmachine: (multinode-571739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:a2:3c", ip: ""} in network mk-multinode-571739: {Iface:virbr1 ExpiryTime:2024-03-27 21:35:51 +0000 UTC Type:0 Mac:52:54:00:02:a2:3c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-571739 Clientid:01:52:54:00:02:a2:3c}
	I0327 20:38:11.557693  462394 main.go:141] libmachine: (multinode-571739) DBG | domain multinode-571739 has defined IP address 192.168.39.16 and MAC address 52:54:00:02:a2:3c in network mk-multinode-571739
	I0327 20:38:11.557835  462394 host.go:66] Checking if "multinode-571739" exists ...
	I0327 20:38:11.558176  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.558234  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.572863  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0327 20:38:11.573234  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.573658  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.573678  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.574005  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.574172  462394 main.go:141] libmachine: (multinode-571739) Calling .DriverName
	I0327 20:38:11.574374  462394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 20:38:11.574396  462394 main.go:141] libmachine: (multinode-571739) Calling .GetSSHHostname
	I0327 20:38:11.577243  462394 main.go:141] libmachine: (multinode-571739) DBG | domain multinode-571739 has defined MAC address 52:54:00:02:a2:3c in network mk-multinode-571739
	I0327 20:38:11.577669  462394 main.go:141] libmachine: (multinode-571739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:a2:3c", ip: ""} in network mk-multinode-571739: {Iface:virbr1 ExpiryTime:2024-03-27 21:35:51 +0000 UTC Type:0 Mac:52:54:00:02:a2:3c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-571739 Clientid:01:52:54:00:02:a2:3c}
	I0327 20:38:11.577694  462394 main.go:141] libmachine: (multinode-571739) DBG | domain multinode-571739 has defined IP address 192.168.39.16 and MAC address 52:54:00:02:a2:3c in network mk-multinode-571739
	I0327 20:38:11.577829  462394 main.go:141] libmachine: (multinode-571739) Calling .GetSSHPort
	I0327 20:38:11.577966  462394 main.go:141] libmachine: (multinode-571739) Calling .GetSSHKeyPath
	I0327 20:38:11.578112  462394 main.go:141] libmachine: (multinode-571739) Calling .GetSSHUsername
	I0327 20:38:11.578269  462394 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/multinode-571739/id_rsa Username:docker}
	I0327 20:38:11.656715  462394 ssh_runner.go:195] Run: systemctl --version
	I0327 20:38:11.664553  462394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:38:11.683905  462394 kubeconfig.go:125] found "multinode-571739" server: "https://192.168.39.16:8443"
	I0327 20:38:11.683955  462394 api_server.go:166] Checking apiserver status ...
	I0327 20:38:11.684002  462394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 20:38:11.698824  462394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup
	W0327 20:38:11.709570  462394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 20:38:11.709626  462394 ssh_runner.go:195] Run: ls
	I0327 20:38:11.714752  462394 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0327 20:38:11.722852  462394 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0327 20:38:11.722875  462394 status.go:422] multinode-571739 apiserver status = Running (err=<nil>)
	I0327 20:38:11.722885  462394 status.go:257] multinode-571739 status: &{Name:multinode-571739 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:38:11.722924  462394 status.go:255] checking status of multinode-571739-m02 ...
	I0327 20:38:11.723224  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.723264  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.738864  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0327 20:38:11.739277  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.739744  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.739767  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.740082  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.740233  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetState
	I0327 20:38:11.741639  462394 status.go:330] multinode-571739-m02 host status = "Running" (err=<nil>)
	I0327 20:38:11.741658  462394 host.go:66] Checking if "multinode-571739-m02" exists ...
	I0327 20:38:11.741984  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.742028  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.757252  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0327 20:38:11.757709  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.758176  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.758203  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.758604  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.758854  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetIP
	I0327 20:38:11.761509  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | domain multinode-571739-m02 has defined MAC address 52:54:00:09:29:8f in network mk-multinode-571739
	I0327 20:38:11.761963  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:29:8f", ip: ""} in network mk-multinode-571739: {Iface:virbr1 ExpiryTime:2024-03-27 21:36:52 +0000 UTC Type:0 Mac:52:54:00:09:29:8f Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-571739-m02 Clientid:01:52:54:00:09:29:8f}
	I0327 20:38:11.761999  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | domain multinode-571739-m02 has defined IP address 192.168.39.179 and MAC address 52:54:00:09:29:8f in network mk-multinode-571739
	I0327 20:38:11.762080  462394 host.go:66] Checking if "multinode-571739-m02" exists ...
	I0327 20:38:11.762411  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.762459  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.781670  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0327 20:38:11.782135  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.782639  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.782659  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.783005  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.783190  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .DriverName
	I0327 20:38:11.783386  462394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 20:38:11.783406  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetSSHHostname
	I0327 20:38:11.786030  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | domain multinode-571739-m02 has defined MAC address 52:54:00:09:29:8f in network mk-multinode-571739
	I0327 20:38:11.786421  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:29:8f", ip: ""} in network mk-multinode-571739: {Iface:virbr1 ExpiryTime:2024-03-27 21:36:52 +0000 UTC Type:0 Mac:52:54:00:09:29:8f Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-571739-m02 Clientid:01:52:54:00:09:29:8f}
	I0327 20:38:11.786455  462394 main.go:141] libmachine: (multinode-571739-m02) DBG | domain multinode-571739-m02 has defined IP address 192.168.39.179 and MAC address 52:54:00:09:29:8f in network mk-multinode-571739
	I0327 20:38:11.786645  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetSSHPort
	I0327 20:38:11.786818  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetSSHKeyPath
	I0327 20:38:11.787007  462394 main.go:141] libmachine: (multinode-571739-m02) Calling .GetSSHUsername
	I0327 20:38:11.787292  462394 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17735-432634/.minikube/machines/multinode-571739-m02/id_rsa Username:docker}
	I0327 20:38:11.868495  462394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 20:38:11.886014  462394 status.go:257] multinode-571739-m02 status: &{Name:multinode-571739-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:38:11.886056  462394 status.go:255] checking status of multinode-571739-m03 ...
	I0327 20:38:11.886394  462394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:38:11.886442  462394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:38:11.902061  462394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0327 20:38:11.902573  462394 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:38:11.903132  462394 main.go:141] libmachine: Using API Version  1
	I0327 20:38:11.903167  462394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:38:11.903590  462394 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:38:11.903805  462394 main.go:141] libmachine: (multinode-571739-m03) Calling .GetState
	I0327 20:38:11.905845  462394 status.go:330] multinode-571739-m03 host status = "Stopped" (err=<nil>)
	I0327 20:38:11.905866  462394 status.go:343] host is not running, skipping remaining checks
	I0327 20:38:11.905872  462394 status.go:257] multinode-571739-m03 status: &{Name:multinode-571739-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)

TestMultiNode/serial/StartAfterStop (25.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-571739 node start m03 -v=7 --alsologtostderr: (25.181403946s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.84s)

TestMultiNode/serial/RestartKeepsNodes (292.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571739
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-571739
E0327 20:38:47.628498  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:40:44.580800  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-571739: (3m5.14116771s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571739 --wait=true -v=8 --alsologtostderr
E0327 20:42:07.464992  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571739 --wait=true -v=8 --alsologtostderr: (1m47.547103279s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571739
--- PASS: TestMultiNode/serial/RestartKeepsNodes (292.81s)

TestMultiNode/serial/DeleteNode (2.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-571739 node delete m03: (1.799720766s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.37s)
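The go-template above pulls each node's `Ready` condition out of `kubectl get nodes`. As a rough illustration of what that filter does, here is a Python equivalent run against a saved `-o json` dump; the miniature document below is hypothetical, real dumps carry far more fields:

```python
import json

# Hypothetical miniature of `kubectl get nodes -o json` output, trimmed
# to the fields the go-template touches.
nodes_json = """
{"items": [
  {"metadata": {"name": "multinode-571739"},
   "status": {"conditions": [
     {"type": "MemoryPressure", "status": "False"},
     {"type": "Ready", "status": "True"}]}},
  {"metadata": {"name": "multinode-571739-m02"},
   "status": {"conditions": [
     {"type": "Ready", "status": "True"}]}}
]}
"""

def ready_statuses(doc: str) -> list[str]:
    """Mirror the go-template: emit .status for every Ready condition."""
    nodes = json.loads(doc)
    return [cond["status"]
            for item in nodes["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]

print(ready_statuses(nodes_json))  # ['True', 'True']
```

The template's nested `range` actions and `if eq .type "Ready"` correspond to the two loops and the `type == "Ready"` test.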

TestMultiNode/serial/StopMultiNode (184.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 stop
E0327 20:45:10.510832  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 20:45:44.580671  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-571739 stop: (3m3.924029817s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571739 status: exit status 7 (101.046056ms)

-- stdout --
	multinode-571739
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571739-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr: exit status 7 (93.151388ms)

-- stdout --
	multinode-571739
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571739-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 20:46:37.005435  464420 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:46:37.005849  464420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:46:37.005900  464420 out.go:304] Setting ErrFile to fd 2...
	I0327 20:46:37.005918  464420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:46:37.006409  464420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:46:37.007029  464420 out.go:298] Setting JSON to false
	I0327 20:46:37.007070  464420 mustload.go:65] Loading cluster: multinode-571739
	I0327 20:46:37.007175  464420 notify.go:220] Checking for updates...
	I0327 20:46:37.007563  464420 config.go:182] Loaded profile config "multinode-571739": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:46:37.007591  464420 status.go:255] checking status of multinode-571739 ...
	I0327 20:46:37.007963  464420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:46:37.008024  464420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:46:37.022572  464420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I0327 20:46:37.022945  464420 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:46:37.023619  464420 main.go:141] libmachine: Using API Version  1
	I0327 20:46:37.023653  464420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:46:37.024030  464420 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:46:37.024227  464420 main.go:141] libmachine: (multinode-571739) Calling .GetState
	I0327 20:46:37.025907  464420 status.go:330] multinode-571739 host status = "Stopped" (err=<nil>)
	I0327 20:46:37.025927  464420 status.go:343] host is not running, skipping remaining checks
	I0327 20:46:37.025935  464420 status.go:257] multinode-571739 status: &{Name:multinode-571739 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 20:46:37.026029  464420 status.go:255] checking status of multinode-571739-m02 ...
	I0327 20:46:37.026362  464420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 20:46:37.026396  464420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 20:46:37.040667  464420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0327 20:46:37.041063  464420 main.go:141] libmachine: () Calling .GetVersion
	I0327 20:46:37.041652  464420 main.go:141] libmachine: Using API Version  1
	I0327 20:46:37.041679  464420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 20:46:37.042035  464420 main.go:141] libmachine: () Calling .GetMachineName
	I0327 20:46:37.042229  464420 main.go:141] libmachine: (multinode-571739-m02) Calling .GetState
	I0327 20:46:37.043671  464420 status.go:330] multinode-571739-m02 host status = "Stopped" (err=<nil>)
	I0327 20:46:37.043686  464420 status.go:343] host is not running, skipping remaining checks
	I0327 20:46:37.043692  464420 status.go:257] multinode-571739-m02 status: &{Name:multinode-571739-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.12s)
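The stdout blocks above use a fixed plain-text layout: a node name line, `key: value` lines under it, and a blank line between nodes. A small Python sketch of parsing that layout back into a structure, using the exact text from this log:

```python
# Status text copied verbatim from the `minikube status` output above;
# the parser below assumes only the name / "key: value" / blank-line
# layout seen in this log, not any documented format guarantee.
status_text = """\
multinode-571739
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-571739-m02
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text: str) -> dict[str, dict[str, str]]:
    nodes: dict[str, dict[str, str]] = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            current = None          # blank line ends the current node block
        elif ":" in line and current is not None:
            key, _, val = line.partition(":")
            nodes[current][key.strip()] = val.strip()
        else:
            current = line          # a bare line starts a new node block
            nodes[current] = {}
    return nodes

parsed = parse_status(status_text)
print(parsed["multinode-571739"]["host"])  # Stopped
```

Note the worker node carries no `apiserver`/`kubeconfig` rows, matching the output above.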

TestMultiNode/serial/RestartMultiNode (82.47s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571739 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 20:47:07.465406  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571739 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m21.927203529s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571739 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.47s)

TestMultiNode/serial/ValidateNameConflict (48.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571739
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571739-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-571739-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (77.207467ms)

-- stdout --
	* [multinode-571739-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-571739-m02' is duplicated with machine name 'multinode-571739-m02' in profile 'multinode-571739'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571739-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571739-m03 --driver=kvm2  --container-runtime=containerd: (46.918930612s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-571739
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-571739: exit status 80 (229.431011ms)

-- stdout --
	* Adding node m03 to cluster multinode-571739 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-571739-m03 already exists in multinode-571739-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-571739-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.15s)
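The first non-zero exit above shows that a new profile name may not collide with a machine name inside an existing multinode profile: `multinode-571739-m02` is taken, while `multinode-571739-m03` was free at that point. A minimal sketch of that uniqueness rule; the data model here is illustrative, not minikube's actual profile code:

```python
# Illustrative stand-in for the profile store: profile name -> machine names.
existing_profiles = {
    "multinode-571739": ["multinode-571739", "multinode-571739-m02"],
}

def profile_name_ok(name: str) -> bool:
    """Reject a name that matches any existing profile or machine name."""
    for profile, machines in existing_profiles.items():
        if name == profile or name in machines:
            return False
    return True

print(profile_name_ok("multinode-571739-m02"))  # False: duplicated machine name
print(profile_name_ok("multinode-571739-m03"))  # True: free at this point
```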

TestPreload (262.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-634597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0327 20:50:44.580858  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-634597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m57.805139418s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-634597 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-634597
E0327 20:52:07.466777  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-634597: (1m32.457392761s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-634597 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-634597 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (50.015686758s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-634597 image list
helpers_test.go:175: Cleaning up "test-preload-634597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-634597
--- PASS: TestPreload (262.41s)

TestScheduledStopUnix (116.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-426788 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-426788 --memory=2048 --driver=kvm2  --container-runtime=containerd: (44.663619796s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426788 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-426788 -n scheduled-stop-426788
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426788 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426788 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426788 -n scheduled-stop-426788
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-426788
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-426788 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-426788
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-426788: exit status 7 (86.024085ms)

-- stdout --
	scheduled-stop-426788
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426788 -n scheduled-stop-426788
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-426788 -n scheduled-stop-426788: exit status 7 (74.505476ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-426788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-426788
--- PASS: TestScheduledStopUnix (116.52s)
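The test drives `--schedule`, a re-`--schedule`, and `--cancel-scheduled` in sequence, and the `signal error was: os: process already finished` lines show each new schedule replacing the previous one. A minimal in-process sketch of those schedule/cancel semantics; minikube actually daemonizes a separate process, so `threading.Timer` here is purely illustrative:

```python
import threading
import time

class ScheduledStop:
    """Illustrative schedule/cancel semantics, not minikube's implementation."""

    def __init__(self) -> None:
        self._timer = None
        self.stopped = False

    def schedule(self, delay_s: float) -> None:
        self.cancel()  # a new schedule replaces any pending one
        self._timer = threading.Timer(delay_s, self._stop)
        self._timer.start()

    def cancel(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _stop(self) -> None:
        self.stopped = True

s = ScheduledStop()
s.schedule(300)    # like `--schedule 5m`
s.schedule(0.05)   # like `--schedule 15s`: replaces the earlier schedule
time.sleep(0.2)
print(s.stopped)   # True: only the most recent schedule fired
```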

TestRunningBinaryUpgrade (207.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2980124758 start -p running-upgrade-077761 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0327 20:55:27.629023  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 20:55:44.580803  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2980124758 start -p running-upgrade-077761 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m13.580588924s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-077761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-077761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m11.555788235s)
helpers_test.go:175: Cleaning up "running-upgrade-077761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-077761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-077761: (1.064999808s)
--- PASS: TestRunningBinaryUpgrade (207.14s)

TestKubernetesUpgrade (188.13s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.400044219s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-232971
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-232971: (2.337894694s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-232971 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-232971 status --format={{.Host}}: exit status 7 (83.575499ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m35.907203156s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-232971 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (101.536197ms)

-- stdout --
	* [kubernetes-upgrade-232971] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-232971
	    minikube start -p kubernetes-upgrade-232971 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2329712 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-232971 --kubernetes-version=v1.30.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-232971 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (23.250207215s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-232971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-232971
--- PASS: TestKubernetesUpgrade (188.13s)
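The `K8S_DOWNGRADE_UNSUPPORTED` exit above comes from comparing the requested Kubernetes version against the cluster's current one. A sketch of such a guard; the parsing handles only the `vMAJOR.MINOR.PATCH[-pre]` shapes seen in this log, not full semver, and is not minikube's actual code:

```python
def parse(v: str) -> tuple[int, int, int]:
    """Parse 'v1.30.0-beta.0'-style strings into a comparable tuple."""
    core = v.lstrip("v").split("-")[0]   # drop '-beta.0'-style suffixes
    major, minor, patch = (int(p) for p in core.split("."))
    return major, minor, patch

def downgrade_requested(current: str, requested: str) -> bool:
    return parse(requested) < parse(current)

print(downgrade_requested("v1.30.0-beta.0", "v1.20.0"))  # True: refused above
print(downgrade_requested("v1.20.0", "v1.30.0-beta.0"))  # False: upgrade is fine
```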

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (97.360992ms)

-- stdout --
	* [NoKubernetes-055357] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
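The `MK_USAGE` exit above is a flag-compatibility check: `--kubernetes-version` is rejected when `--no-kubernetes` is set. A minimal sketch of that validation; the flag plumbing is illustrative, not minikube's actual code:

```python
def validate_flags(no_kubernetes: bool, kubernetes_version):
    """Return an error string on an invalid flag combination, else None."""
    if no_kubernetes and kubernetes_version:
        return ("MK_USAGE: cannot specify --kubernetes-version "
                "with --no-kubernetes")
    return None

print(validate_flags(True, "1.20"))   # the conflict exercised by the test
print(validate_flags(True, None))     # --no-kubernetes alone is valid
```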

TestNoKubernetes/serial/StartWithK8s (97.42s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-055357 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-055357 --driver=kvm2  --container-runtime=containerd: (1m37.131474769s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-055357 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.42s)

TestNetworkPlugins/group/false (6.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-443810 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-443810 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (150.36179ms)

-- stdout --
	* [false-443810] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0327 20:56:01.795666  468834 out.go:291] Setting OutFile to fd 1 ...
	I0327 20:56:01.796057  468834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:56:01.796072  468834 out.go:304] Setting ErrFile to fd 2...
	I0327 20:56:01.796078  468834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 20:56:01.796417  468834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-432634/.minikube/bin
	I0327 20:56:01.797282  468834 out.go:298] Setting JSON to false
	I0327 20:56:01.798728  468834 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16714,"bootTime":1711556248,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 20:56:01.798830  468834 start.go:139] virtualization: kvm guest
	I0327 20:56:01.801479  468834 out.go:177] * [false-443810] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 20:56:01.803379  468834 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 20:56:01.803435  468834 notify.go:220] Checking for updates...
	I0327 20:56:01.805151  468834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 20:56:01.806906  468834 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-432634/kubeconfig
	I0327 20:56:01.808384  468834 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-432634/.minikube
	I0327 20:56:01.809704  468834 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 20:56:01.811282  468834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 20:56:01.813269  468834 config.go:182] Loaded profile config "NoKubernetes-055357": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:56:01.813452  468834 config.go:182] Loaded profile config "offline-containerd-032736": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 20:56:01.813569  468834 config.go:182] Loaded profile config "running-upgrade-077761": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0327 20:56:01.813694  468834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 20:56:01.858171  468834 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 20:56:01.859496  468834 start.go:297] selected driver: kvm2
	I0327 20:56:01.859515  468834 start.go:901] validating driver "kvm2" against <nil>
	I0327 20:56:01.859531  468834 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 20:56:01.861730  468834 out.go:177] 
	W0327 20:56:01.863155  468834 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0327 20:56:01.864553  468834 out.go:177] 

** /stderr **
net_test.go:88:
----------------------- debugLogs start: false-443810 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-443810

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-443810

>>> host: /etc/nsswitch.conf:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/hosts:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/resolv.conf:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-443810

>>> host: crictl pods:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: crictl containers:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> k8s: describe netcat deployment:
error: context "false-443810" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-443810" does not exist

>>> k8s: netcat logs:
error: context "false-443810" does not exist

>>> k8s: describe coredns deployment:
error: context "false-443810" does not exist

>>> k8s: describe coredns pods:
error: context "false-443810" does not exist

>>> k8s: coredns logs:
error: context "false-443810" does not exist

>>> k8s: describe api server pod(s):
error: context "false-443810" does not exist

>>> k8s: api server logs:
error: context "false-443810" does not exist

>>> host: /etc/cni:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: ip a s:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: ip r s:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: iptables-save:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: iptables table nat:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> k8s: describe kube-proxy daemon set:
error: context "false-443810" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-443810" does not exist

>>> k8s: kube-proxy logs:
error: context "false-443810" does not exist

>>> host: kubelet daemon status:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: kubelet daemon config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> k8s: kubelet logs:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-443810

>>> host: docker daemon status:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: docker daemon config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/docker/daemon.json:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: docker system info:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: cri-docker daemon status:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: cri-docker daemon config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: cri-dockerd version:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: containerd daemon status:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: containerd daemon config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/containerd/config.toml:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: containerd config dump:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: crio daemon status:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: crio daemon config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: /etc/crio:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

>>> host: crio config:
* Profile "false-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443810"

----------------------- debugLogs end: false-443810 [took: 5.916755613s] --------------------------------
helpers_test.go:175: Cleaning up "false-443810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-443810
--- PASS: TestNetworkPlugins/group/false (6.23s)

TestNoKubernetes/serial/StartWithStopK8s (50.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (49.315113081s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-055357 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-055357 status -o json: exit status 2 (277.648021ms)

-- stdout --
	{"Name":"NoKubernetes-055357","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-055357
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.49s)

TestNoKubernetes/serial/Start (36.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-055357 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (36.962865752s)
--- PASS: TestNoKubernetes/serial/Start (36.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-055357 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-055357 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.843946ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (18.93s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.327134096s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.604895822s)
--- PASS: TestNoKubernetes/serial/ProfileList (18.93s)

TestNoKubernetes/serial/Stop (1.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-055357
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-055357: (1.393701864s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (30.88s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-055357 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-055357 --driver=kvm2  --container-runtime=containerd: (30.8767667s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.88s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (197.31s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2127932016 start -p stopped-upgrade-779366 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2127932016 start -p stopped-upgrade-779366 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m23.259275685s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2127932016 -p stopped-upgrade-779366 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2127932016 -p stopped-upgrade-779366 stop: (2.403884922s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-779366 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-779366 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m51.648753664s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (197.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-055357 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-055357 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.127396ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestPause/serial/Start (119.71s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-131379 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-131379 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m59.705947968s)
--- PASS: TestPause/serial/Start (119.71s)

TestNetworkPlugins/group/auto/Start (101.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.131581504s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.13s)

TestNetworkPlugins/group/kindnet/Start (89.60s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0327 21:01:50.511710  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m29.600505575s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.60s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-779366
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-779366: (1.280332261s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (104.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0327 21:02:07.465853  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m44.033866734s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.03s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (59.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-131379 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-131379 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.842025303s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (59.87s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lsjxc" [5e2e92e2-2933-438d-b9c6-26c8eee7db2e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004617319s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pqffd" [7ad43877-d6e0-4c3a-af7d-827a6c4f90e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pqffd" [7ad43877-d6e0-4c3a-af7d-827a6c4f90e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005227196s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k7whp" [5231acb1-7553-448d-8536-98c3b183cd29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k7whp" [5231acb1-7553-448d-8536-98c3b183cd29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005499084s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestPause/serial/Pause (1.05s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-131379 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-131379 --alsologtostderr -v=5: (1.050283265s)
--- PASS: TestPause/serial/Pause (1.05s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-131379 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-131379 --output=json --layout=cluster: exit status 2 (329.78395ms)
-- stdout --
	{"Name":"pause-131379","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-131379","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

                                                
                                    
TestPause/serial/Unpause (1.02s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-131379 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-131379 --alsologtostderr -v=5: (1.023957949s)
--- PASS: TestPause/serial/Unpause (1.02s)

                                                
                                    
TestPause/serial/PauseAgain (1.13s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-131379 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-131379 --alsologtostderr -v=5: (1.131385693s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
TestPause/serial/DeletePaused (0.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-131379 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.76s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (85.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.524736163s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.52s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (110.7s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m50.699884773s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.70s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (152.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m32.773450824s)
--- PASS: TestNetworkPlugins/group/bridge/Start (152.77s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6wltj" [84d3210c-7c31-43e8-bf5f-b9c9865bafe0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006210468s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q8n92" [291ddd47-96da-421f-b1fd-c60414df22a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q8n92" [291ddd47-96da-421f-b1fd-c60414df22a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007868854s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.98s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-443810 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m31.981002457s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f5b8d" [675ad223-0f70-4945-a72e-71ae8c10e6bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f5b8d" [675ad223-0f70-4945-a72e-71ae8c10e6bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005718212s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kcdxf" [86624597-6aad-45b1-af5a-b42e3fd7346a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004710301s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (182.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-361889 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-361889 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m2.579008039s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (182.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fpsq7" [32217467-7fbc-4b47-b7cf-72a291f3b790] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fpsq7" [32217467-7fbc-4b47-b7cf-72a291f3b790] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004696317s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gtjbq" [b3223eff-f80f-4b89-b452-c44f25d27efb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gtjbq" [b3223eff-f80f-4b89-b452-c44f25d27efb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.005971226s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (129.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-255765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-255765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (2m9.070267771s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (129.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-443810 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-443810 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ml4x5" [b9ebfbfb-6bdd-4666-b13d-ed8be8960ca9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ml4x5" [b9ebfbfb-6bdd-4666-b13d-ed8be8960ca9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004056011s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-443810 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-443810 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0327 21:14:07.294625  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:14:47.053216  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m47.719600405s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.72s)

TestStartStop/group/newest-cni/serial/FirstStart (81.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-481374 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0327 21:07:07.465163  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 21:07:48.987173  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:48.992467  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.002824  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.023158  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.063565  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.144004  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.304621  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:49.625603  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:50.265811  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:07:51.546982  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-481374 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m21.915963924s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (81.92s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-481374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-481374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126532952s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (2.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-481374 --alsologtostderr -v=3
E0327 21:07:54.107538  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-481374 --alsologtostderr -v=3: (2.377150048s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.38s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481374 -n newest-cni-481374
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481374 -n newest-cni-481374: exit status 7 (87.271838ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-481374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (33.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-481374 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0327 21:07:57.411491  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.416854  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.427162  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.447503  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.487961  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.568361  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:57.729339  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:58.050539  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:58.691697  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:07:59.228788  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-481374 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (33.031476241s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481374 -n newest-cni-481374
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.32s)

TestStartStop/group/no-preload/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-255765 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [12c0d348-a65a-4bfc-8184-12002f950f80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0327 21:07:59.972531  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
helpers_test.go:344: "busybox" [12c0d348-a65a-4bfc-8184-12002f950f80] Running
E0327 21:08:02.533463  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.005983173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-255765 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-813384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ae6566f9-a9d9-48cb-a0ad-a8437b24203a] Pending
E0327 21:08:07.654614  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ae6566f9-a9d9-48cb-a0ad-a8437b24203a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ae6566f9-a9d9-48cb-a0ad-a8437b24203a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005953856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-813384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-255765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-255765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.136288342s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-255765 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/Stop (92.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-255765 --alsologtostderr -v=3
E0327 21:08:09.469713  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-255765 --alsologtostderr -v=3: (1m32.567620546s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051431278s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-813384 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-813384 --alsologtostderr -v=3
E0327 21:08:17.895493  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-813384 --alsologtostderr -v=3: (1m32.56459226s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-361889 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a571cda6-68b4-4b4d-aed5-69bd53068757] Pending
helpers_test.go:344: "busybox" [a571cda6-68b4-4b4d-aed5-69bd53068757] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a571cda6-68b4-4b4d-aed5-69bd53068757] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.005889785s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-361889 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-481374 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-361889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-361889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025108058s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-361889 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Pause (2.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-481374 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481374 -n newest-cni-481374
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481374 -n newest-cni-481374: exit status 2 (298.211172ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481374 -n newest-cni-481374
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481374 -n newest-cni-481374: exit status 2 (285.968888ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-481374 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481374 -n newest-cni-481374
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481374 -n newest-cni-481374
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.79s)

TestStartStop/group/old-k8s-version/serial/Stop (92.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-361889 --alsologtostderr -v=3
E0327 21:08:29.950608  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-361889 --alsologtostderr -v=3: (1m32.574634491s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.57s)

TestStartStop/group/embed-certs/serial/FirstStart (61.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-479307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 21:08:38.376318  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:08:39.608525  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.613911  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.624240  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.644640  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.685795  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.766264  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:39.926754  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:40.247486  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:40.887711  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:42.168625  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:44.729142  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:08:49.850243  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:09:00.091112  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:09:10.910816  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:09:19.336728  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:09:20.572308  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-479307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m1.201642734s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.20s)

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479307 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea0bc0dd-be22-4478-8407-380dd17dd3f4] Pending
helpers_test.go:344: "busybox" [ea0bc0dd-be22-4478-8407-380dd17dd3f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ea0bc0dd-be22-4478-8407-380dd17dd3f4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005161732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-255765 -n no-preload-255765
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-255765 -n no-preload-255765: exit status 7 (84.983123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-255765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
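The `status error: exit status 7 (may be ok)` line above reflects how these tests interpret `minikube status` exit codes: the command exits 0 when the host is Running and non-zero otherwise, and the harness accepts exit 7 (which accompanies a `Stopped` host, as in the stdout above) because it is expected right after a Stop step. A minimal hypothetical sketch of that convention (`statusExitOK` is an illustrative helper, not a function from the minikube source):

```go
package main

import "fmt"

// statusExitOK reports whether a `minikube status` exit code should be
// treated as acceptable by the test: 0 (host Running) and 7 (host
// Stopped, expected immediately after a Stop step) both are; any other
// code indicates a real failure.
func statusExitOK(code int) bool {
	return code == 0 || code == 7
}

func main() {
	// Exercise the three cases seen (or implied) in the log above.
	for _, code := range []int{0, 7, 1} {
		fmt.Printf("exit %d -> ok=%v\n", code, statusExitOK(code))
	}
}
```

Under this assumption, the EnableAddonAfterStop tests can proceed to `addons enable dashboard` after a non-zero status exit instead of failing outright.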

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (317.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-255765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-255765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (5m17.533327311s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-255765 -n no-preload-255765
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (317.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-479307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-479307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078297614s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-479307 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-479307 --alsologtostderr -v=3
E0327 21:09:47.052942  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.058240  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.068658  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.088972  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.129362  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.209741  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.369953  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:47.691061  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:48.331594  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-479307 --alsologtostderr -v=3: (1m32.57719318s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384: exit status 7 (87.644894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-813384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 21:09:49.612330  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:52.172742  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:09:57.293204  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:10:01.533444  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m7.365674275s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-361889 -n old-k8s-version-361889
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-361889 -n old-k8s-version-361889: exit status 7 (100.637679ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-361889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (208.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-361889 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0327 21:10:07.534383  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:10:14.685180  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:14.690516  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:14.700781  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:14.721141  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:14.761527  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:14.841970  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:15.003063  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:15.323695  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:15.964877  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:17.245471  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:19.806008  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:24.927063  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:28.014782  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:10:32.831531  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:10:35.167381  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:41.257296  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:10:44.580585  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 21:10:47.685482  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:47.690864  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:47.701187  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:47.721546  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:47.761941  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:47.842325  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:48.003310  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:48.323954  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:48.964599  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:50.245045  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:52.805776  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:10:55.648635  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:10:57.926808  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:11:00.797183  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:00.802495  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:00.812829  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:00.833184  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:00.873551  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:00.954058  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:01.114498  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:01.434762  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:02.075810  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:03.356785  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:05.917885  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:08.167441  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:11:08.975177  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:11:11.038160  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-361889 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m28.467714739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-361889 -n old-k8s-version-361889
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (208.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-479307 -n embed-certs-479307
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-479307 -n embed-certs-479307: exit status 7 (96.60478ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-479307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (297.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-479307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 21:11:21.279460  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:11:23.454415  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
E0327 21:11:28.647758  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:11:36.609435  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:11:41.760534  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:12:07.465719  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/functional-870702/client.crt: no such file or directory
E0327 21:12:07.630060  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/addons-336680/client.crt: no such file or directory
E0327 21:12:09.608296  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
E0327 21:12:22.720963  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
E0327 21:12:30.895769  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
E0327 21:12:48.986861  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:12:57.411217  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
E0327 21:12:58.529979  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:13:16.672504  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/kindnet-443810/client.crt: no such file or directory
E0327 21:13:25.097943  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/auto-443810/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-479307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m57.09914747s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-479307 -n embed-certs-479307
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bcgwj" [03c9084d-584e-4edd-8268-0c7504d2c3e6] Running
E0327 21:13:31.529336  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004255316s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bcgwj" [03c9084d-584e-4edd-8268-0c7504d2c3e6] Running
E0327 21:13:39.608866  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/calico-443810/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00403506s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-361889 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-361889 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-361889 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-361889 -n old-k8s-version-361889
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-361889 -n old-k8s-version-361889: exit status 2 (278.64215ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-361889 -n old-k8s-version-361889
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-361889 -n old-k8s-version-361889: exit status 2 (277.63481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-361889 --alsologtostderr -v=1
E0327 21:13:44.641771  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/bridge-443810/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-361889 -n old-k8s-version-361889
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-361889 -n old-k8s-version-361889
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dzv6g" [a7f8fb28-d29c-4955-884c-2786e612d357] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005867332s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n75cd" [23d9eaeb-e135-4fd9-8db4-528b36097bf3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n75cd" [23d9eaeb-e135-4fd9-8db4-528b36097bf3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005167912s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dzv6g" [a7f8fb28-d29c-4955-884c-2786e612d357] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005717314s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-813384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813384 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-813384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384: exit status 2 (272.896491ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384: exit status 2 (280.067939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-813384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-813384 -n default-k8s-diff-port-813384
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n75cd" [23d9eaeb-e135-4fd9-8db4-528b36097bf3] Running
E0327 21:15:14.684891  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/flannel-443810/client.crt: no such file or directory
E0327 21:15:14.735977  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/custom-flannel-443810/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004837763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-255765 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-255765 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-255765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-255765 -n no-preload-255765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-255765 -n no-preload-255765: exit status 2 (269.153877ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-255765 -n no-preload-255765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-255765 -n no-preload-255765: exit status 2 (271.297925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-255765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-255765 -n no-preload-255765
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-255765 -n no-preload-255765
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-476xp" [5c73865d-61f4-4db8-94f6-1fd55bab4239] Running
E0327 21:16:15.369761  439928 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-432634/.minikube/profiles/enable-default-cni-443810/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005362565s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-476xp" [5c73865d-61f4-4db8-94f6-1fd55bab4239] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005019313s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-479307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-479307 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-479307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-479307 -n embed-certs-479307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-479307 -n embed-certs-479307: exit status 2 (266.543172ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-479307 -n embed-certs-479307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-479307 -n embed-certs-479307: exit status 2 (274.801911ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-479307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-479307 -n embed-certs-479307
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-479307 -n embed-certs-479307
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.87s)

                                                
                                    

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.03
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
151 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.12
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.05
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 3.84
271 TestNetworkPlugins/group/cilium 4.25
286 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-443810 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-443810

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-443810

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/hosts:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/resolv.conf:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-443810

>>> host: crictl pods:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: crictl containers:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> k8s: describe netcat deployment:
error: context "kubenet-443810" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-443810" does not exist

>>> k8s: netcat logs:
error: context "kubenet-443810" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-443810" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-443810" does not exist

>>> k8s: coredns logs:
error: context "kubenet-443810" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-443810" does not exist

>>> k8s: api server logs:
error: context "kubenet-443810" does not exist

>>> host: /etc/cni:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: ip a s:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: ip r s:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: iptables-save:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: iptables table nat:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-443810" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-443810" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-443810" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: kubelet daemon config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> k8s: kubelet logs:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-443810

>>> host: docker daemon status:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: docker daemon config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: docker system info:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: cri-docker daemon status:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: cri-docker daemon config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: cri-dockerd version:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: containerd daemon status:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: containerd daemon config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: containerd config dump:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: crio daemon status:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: crio daemon config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: /etc/crio:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

>>> host: crio config:
* Profile "kubenet-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443810"

----------------------- debugLogs end: kubenet-443810 [took: 3.659280523s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-443810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-443810
--- SKIP: TestNetworkPlugins/group/kubenet (3.84s)

TestNetworkPlugins/group/cilium (4.25s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-443810 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-443810

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-443810" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-443810" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-443810" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-443810" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-443810" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: kubelet daemon config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> k8s: kubelet logs:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-443810

>>> host: docker daemon status:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: docker daemon config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: docker system info:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: cri-docker daemon status:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: cri-docker daemon config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: cri-dockerd version:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: containerd daemon status:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: containerd daemon config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: containerd config dump:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: crio daemon status:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: crio daemon config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: /etc/crio:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

>>> host: crio config:
* Profile "cilium-443810" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443810"

----------------------- debugLogs end: cilium-443810 [took: 4.074612779s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-443810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-443810
--- SKIP: TestNetworkPlugins/group/cilium (4.25s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-055836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-055836
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
